Question: How do I get a dashboard if I am in a remote location and have no internet?

I recently thought about this question while watching a documentary on fishing in the Antarctic. The crew needed some data on catch history, so they radioed for the information, which was relayed by voice even though they had a laptop. It turned out that they had no internet, so they couldn’t send or receive data through conventional methods. This must be the same the world over, right? Fishermen, researchers, off-gridders and remote communities must all experience times when visual analysis could prove beneficial or even be lifesaving.

So how could we design something to achieve this? 

The one thing all of these groups seem to have in common is a radio transmitter/receiver. This is their connection to the outside world and could be used as a data-transmission device by sending the data as audio. The next question is: what do we send? Do we send the source data, or do we send the entire thing (an image)?

If I am sending data, I imagine I could leverage something like Alteryx to output a TDE and then encode it into an audio format before sending it. The benefit of this is that if the receiving end has Tableau Desktop or Reader, it can be used as an interactive viz. One con is that audio is prone to interruptions, which when sending data could result in a corrupt, unusable TDE.

If I am sending the entire thing as something like an image, I don’t need to worry about software versions, missing small bits of data or any technical skill requirements. One con is that this is just a static image: you are at the mercy of the dashboard developer to have answered all of your questions during the development of the dashboard. Another problem is the level of detail you can send, although a single viz with pertinent information should be possible.

Let’s say I know I am going to send it via radio and I have decided to send an entire Tableau dashboard (subscription image). What is out there that is free and robust enough to be used in this way? The answer I came up with is SSTV (Slow Scan Television). This is a process of encoding an image, normally taken in a remote location, into an audio format and sending it via radio to a receiver that decodes it and displays the image on the other side. Think dial-up, but even more low-tech. Interestingly, essentially the same technique has been used to send images back to NASA from early space missions and other exploratory vehicles. If it is good enough for them then it should serve well in this instance – and most importantly, it’s free!

So what will this process look like?

I see this working something like this: Tableau Server Email subscription > SSTV encoder > *Radio Transmission > SSTV decoder

*There are a number of options available to send transmissions including LFR, VHF, UHF and X-Band. As VHF is the most common, we will use that in this experiment.
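Conceptually, the SSTV encoding step maps picture brightness to audio pitch. Real modes such as Martin or Scottie scan the image line by line and encode each pixel’s luminance as a tone between roughly 1500 Hz (black) and 2300 Hz (white). The sketch below shows only that core mapping – the function name and sample scanline are my own illustration, not part of any SSTV software:

```python
# Toy sketch of the SSTV idea: map 8-bit pixel brightness to an audio
# frequency. Real SSTV modes encode luminance as a tone between roughly
# 1500 Hz (black) and 2300 Hz (white); everything else (sync pulses,
# colour channels, timing) is omitted here.

BLACK_HZ = 1500.0  # tone for luminance 0
WHITE_HZ = 2300.0  # tone for luminance 255

def luminance_to_freq(luma: int) -> float:
    """Map an 8-bit luminance value (0-255) to an SSTV-style tone in Hz."""
    if not 0 <= luma <= 255:
        raise ValueError("luminance must be 0-255")
    return BLACK_HZ + (WHITE_HZ - BLACK_HZ) * luma / 255.0

# A scanline of pixels becomes a sequence of tones to transmit:
scanline = [0, 128, 255]
tones = [luminance_to_freq(p) for p in scanline]
print(tones[0], tones[-1])  # 1500.0 2300.0
```

The decoder simply runs the mapping in reverse: measure the received tone, recover the brightness, and paint the pixel.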

The biggest consideration with radio is range, which is limited by line of sight without any type of repeater. This means from ground to ground, a normal person standing has a range of around 3 miles. If the transmitter is a 330ft Radio tower, the range is 22 miles. If the tower is on a hilltop it can be upwards of 60 miles… and if you look up, a 23 Watt VHF radio transmitter can communicate with Voyager (+11.7 billion miles away) which would be some kind of world record for sending a dashboard, right?
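Those range figures come straight from the line-of-sight horizon. A common approximation gives the horizon distance in statute miles as about 1.22 times the square root of the antenna height in feet, which reproduces the numbers above:

```python
import math

# Rough line-of-sight horizon: d (statute miles) ~= 1.22 * sqrt(h),
# where h is the antenna height in feet. The 1.22 constant is the
# standard visual-horizon approximation; radio range can be slightly
# further due to atmospheric refraction.

def horizon_miles(height_ft: float) -> float:
    return 1.22 * math.sqrt(height_ft)

print(round(horizon_miles(6)))    # eye level (~6 ft): 3 miles
print(round(horizon_miles(330)))  # 330 ft tower: 22 miles
```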

So what do I need? 

A pair of radios and a pair of computers with SSTV encoders/decoders installed. I am using some Terrain 750 radios, my iPhone and my iPad (with assistance from my missus, who kindly let me use her phone and did the transmission for me!). I have to admit that because I used my iPhone and iPad I did have to pay £2.99 for an app, but to keep this a zero-cost solution you can find a link to a free Windows version below. It is also available for macOS, etc…

Download a copy of a free SSTV encoder/decoder here:

So here it is, the proof of concept test:

Ok, so a few thoughts after this test. The quality was quite low. This means that the viz would need to be bold, but this isn’t too much of a hassle. It would just be a design consideration. The good news is that this does work.


Moving forward…

What could we do to make this better? How about making it request based? Send a request tone and trigger a transmission of a particular viz… that’s certainly possible with some more time. Otherwise it would need to run off the back of a Tableau schedule – and if it is scheduled, you could have a different “feed” at various frequencies.

Remember this was just a proof of concept. It is nowhere near a complete solution or even the right solution… but it does work. I can now in theory send a Viz to the moon or even out beyond Voyager, which is actually really cool if you think about it, right? Ok, it might just be me who gets a little excited by this geek-out. 🙂 

Thanks for taking the time to follow through this. I know it is not my normal admin view type post, so I hope you found it a little interesting. Next time I will be showing you how to read some VizQL logs for service performance analysis.

Event History – Updated events

Below is an update to the events that can be found within the event history tables within Postgres. These events are the foundation of any admin view. This list was taken from Tableau Server 10.0.6. All of these events help build insight into user behaviour, which is an area of Tableau Server analysis I am focusing on at the moment.

How do I start with custom admin views?

I wanted to revisit the start of the admin views. It is easy to get ahead of ourselves, forget how it all started and forget that others may just be setting out down this road.

If you are looking to create your own admin views then you may be asking yourself “where do I start?” or “what do I need to cover in them?”…

The truth is that every service is different and requires various levels of introspection depending on the type of deployment or load. You will also find that there is no perfect set of admin views and they will always evolve over time.

My approach is the below process:

  1. List your service questions – what do you want to know? Your basic questions being usage based – e.g. Accesses, Publishes, etc…
  2. Wireframe your admin views – get the pen and paper out and design what you would like it to look like. This way you have no technological limit. 
  3. Create your data sources – using your Postgres database or logs, build your queries and data sources to answer your questions
  4. Create your views – build it to your wireframe design. Your wireframe is your guide only, so don’t let your creativity be hindered
  5. Repeat – your answers may prompt further questions. Go through the same process to answer your new questions

Example Wireframe:

The above wireframe is a great start to your admin views and is fairly easy to make so feel free to use it as the basis of your own. Simply use the event history tables from your Postgres database to generate this entire dashboard. More info on the event history tables and how to build this can be found here.

Above Left: This is your portal page for your admin view. Use sparklines and small charts to give you summary information.
Above Right: This is an example of a drill through action to a detailed dashboard for each area. The example shows the extract refreshes. This is your chance to interrogate your data.

Good luck on your admin view journey.

Offline Admin Views

Ok, you have a great set of admin views for your Tableau Server(s), but how useful are they if your server has gone down and all of your monitoring or log analysis has gone down with it? Do you go old school and go through the logs manually or do you use your offline admin views?

This post, although brief, is based on a question I asked myself recently; do I have admin view redundancy?

You and your team have worked hard building your admin views and, like my team, have published a number of data sources to the server(s) to reuse on multiple dashboards, etc… as any conscientious Tableau publisher should! However, by only having your dashboards and data sources available on the server – especially when you have included component or log analysis for the purpose of problem solving – you may have inadvertently created a dependency. You are as reliant on the service as your users.

The first part of this answer is simple; make sure you save a copy of your admin views on your network or your local computer (a network share is my preference as multiple team members can access it). It really is very easy to forget to save a local copy somewhere where everyone can access it.

Your data is a different story. If you’re using Postgres, you’re going to lose it unless you have a mirrored version of the db located off the server, and your extracted data sources are likely to be out of date in your backups too. In the event your actual server(s) is/are inaccessible for a period of time, your logs (even OS logs) are going to be unavailable as well (cringe). BUT – if you’re using Splunk and have a nice little instance collating your logs, you’re in luck.


So… let’s recap: your servers are down and you have a copy of your admin views available on your network. Your live Splunk data sources in your admin views are still working fine, allowing a fully searchable set of views giving you performance metrics, errors from the logs and even a list of events from the OS, right up to the last second. How great is that?! No guessing, no assumptions about what could have gone wrong… it’s all there, right in front of you.

Of course, you should always have third-party applications and servers such as SCOM monitoring and Netcool doing their bits, but having this admin view redundancy in place gave me a feeling of relief and security – especially as a major part of my problem solving now revolves around the analysis and output from these views.

Overview of what Splunk gives you out of the box for your admin views:

  • Log collation – all of the Tableau Server logs in one place and one format.
  • Perfmon data – key perfmon data such as memory, CPU, disk and network usage.
  • Windows Events – system, application and security logs from the OS giving you everything that happened outside of the application.

With this data alone, I reckon anyone can figure out what happened to their servers. Even a non-technical user can look at a few well-presented views and pick out the pertinent errors and problems.

I would be interested to hear about your monitoring and admin view redundancy. Do you use admin views to carry out these types of debugging tasks? Feel free to message me via Twitter or something to let me know.

I hope this wasn’t too boring and pointless and you can sleep easy tonight knowing that when that call comes you have the data to hand.

Have a great day!

Event History Audit

You’re sitting at your desk and you get an angry phone call: “Someone has deleted my workbook! Who was it?!”

A call most of us have had at some point in our Tableau career, I am sure.

Let’s go one worse, a call from the legal department: “We need to see what ‘Joe Bloggs’ has been doing on Tableau Server – Now!” – It has now become a nightmare and it may be time-sensitive as well.

Although these seem like straightforward questions, they are not always easy to answer unless you have built the functionality to do so. This is where your custom admin views come into their own with Event History.

Using Postgres we can interrogate the internal Tableau database to answer a number of questions. These questions are generally Who, What & When – but what particular events can we track?

Here are some common events that you can monitor:

Action Type: Access
  • Logout
  • Download Workbook
  • Download Data Source
  • Access View
  • Access Data Source

Action Type: Create
  • Create Workbook Task
  • Create System User
  • Create Site User
  • Create Schedule
  • Create Project
  • Create Group
  • Create Data Source Task
  • Add Comment

Action Type: Delete
  • Delete Workbook Task
  • Delete Workbook
  • Delete View
  • Delete System User
  • Delete Site User
  • Delete Schedule
  • Delete Project
  • Delete Group
  • Delete Data Source Task
  • Delete Data Source
  • Delete Comment

Action Type: Publish
  • Publish Workbook
  • Publish View
  • Publish Data Source

Action Type: Send E-Mail
  • Send Subscription E-Mail For Workbook
  • Send Subscription E-Mail For View

Action Type: Update
  • Update System User Full Name
  • Update System User Email
  • Update Site
  • Update Schedule
  • Update Project
  • Update Data Source Task
  • Update Data Source
  • Update Comment
  • Replace Data Source Extract
  • Refresh Workbook Extract
  • Refresh Data Source Extract
  • Move Workbook To
  • Move Workbook Task to Schedule
  • Move Workbook Task from Schedule
  • Move Workbook From
  • Move Datasource To
  • Move Datasource From
  • Move Data Source Task to Schedule
  • Move Data Source Task from Schedule
  • Increment Workbook Extract
  • Increment Data Source Extract
  • Enable Schedule
  • Disable Schedule
  • Change Workbook Ownership From
  • Change Workbook Ownership To
  • Change Datasource Ownership To
  • Change Datasource Ownership From
  • Append to Data Source Extract

Something to consider here: the ‘login’ and ‘logout’ events will not be accurate when you have single sign-on, etc. You can use bootstrap session information to track this accurately.

You can also monitor other things with API calls, etc., but for now let’s focus on these common questions.

We are going to work on the premise that you already have some working knowledge of accessing your Postgres database, so I am not going to cover opening that up in this blog post.

When you’re connected to Postgres, you will want to connect to the following tables using these joins:

Event history table joins diagram

You will see I have included the joins for the tables. The bottom left box shows an example of the joins required to obtain the relevant event information. It is good to note the relationship between ‘Actor’ and ‘Target’: these refer to the person carrying out the event (Actor) and the person it affects (Target). Target is only populated for events like creating and deleting users, not for the owner of the content.

You can join the main views to the ‘hist_’ tables to get extra information on workbooks, etc., but this will increase the volume of data, so I do not bring that back for my dashboard in order to keep reasonable performance.
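As a sketch of the join shape, here is the same idea against an in-memory SQLite database. The table and column names mirror Tableau’s Postgres ‘workgroup’ schema (historical_events, historical_event_types, hist_users) but are simplified, so treat them as illustrative and verify against your own version’s schema before copying:

```python
import sqlite3

# Illustrative event-history join. Simplified stand-ins for Tableau's
# workgroup tables: historical_events links to an event type (which
# carries the Action Type) and to Actor/Target rows in hist_users.

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE historical_event_types (
    type_id INTEGER PRIMARY KEY, name TEXT, action_type TEXT);
CREATE TABLE hist_users (
    id INTEGER PRIMARY KEY, user_id INTEGER, name TEXT);
CREATE TABLE historical_events (
    id INTEGER PRIMARY KEY, historical_event_type_id INTEGER,
    hist_actor_user_id INTEGER, hist_target_user_id INTEGER,
    details TEXT, created_at TEXT);

INSERT INTO historical_event_types VALUES (1, 'Delete Workbook', 'Delete');
INSERT INTO hist_users VALUES (10, 100, 'Joe Bloggs');
INSERT INTO historical_events VALUES
    (1, 1, 10, NULL, 'Sales Workbook', '2017-05-01 09:30:00');
""")

rows = con.execute("""
    SELECT e.created_at, t.action_type, t.name AS event_type,
           actor.name AS actor, target.name AS target, e.details
    FROM historical_events e
    JOIN historical_event_types t ON e.historical_event_type_id = t.type_id
    JOIN hist_users actor ON e.hist_actor_user_id = actor.id
    LEFT JOIN hist_users target ON e.hist_target_user_id = target.id
    ORDER BY e.created_at
""").fetchall()

for row in rows:
    print(row)
```

Note the LEFT JOIN on Target: as described above, Target is only populated for user-management events, so an inner join there would silently drop most rows.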

You can also pick up failing extracts from this if you bring back the “Details” column from the Historical_Events table. The “Details” column is not always populated but is important, so don’t be tempted to leave it out just because it is a wide column.

Once you have this as a data source you can then create a dashboard that looks at it. This is down to your particular requirements, but I have a list of events, a timeline and a dreaded crosstab list (for event details – and yes, it is important, so I control it with filter actions). Filters are important here as well: I have a free-text search parameter that flicks between names, details, etc., and filters for the Event Timestamp, Event Type and Action Type. Filtering will allow you to quickly search for people and content.

Example part of an Event History Dashboard:

Event history dashboard example

That is all I have time for at the moment. I hope this goes some way to answer these questions for you. I will do some more posts on this moving forward as time allows. In the meantime I will try to answer your questions if you have any. A special shout out to the CAP admins who requested this info.



Splunk your Tableau

If you are a Tableau Server admin then I am sure you know what I mean by the “great log search”: the process by which you go wading through millions of folders, files, data types and rows to identify that error and take appropriate action on it.

Wouldn’t it be great if you only had to go to one place and type a username, IP address or other keyword to identify that error? Better yet, what about having a dashboard that shows you what errors have happened when and being able to actually use all of the log data for some proper analysis? – that’s not possible right? … well actually it is and it is easier to set up than you think. The answer is Splunk.

What is Splunk?

  Splunk is actually another analysis tool; however, the purpose of this post is not to analyse its visualisation capabilities, but to show that leveraging its indexing and search engine together with Tableau makes a formidable admin tool.

(Online example)

Once you have indexed your logs, Splunk will continue to monitor them, picking up any changes from the server logs and adding them to the index. The data is then available for you to “enrich”, which is basically to identify columns in your data based on a sample set. After you have done this you can generate a “report” (which for our purposes is an ODBC data source). Connect Tableau Desktop to your Splunk server using the Splunk drivers and you will then have a live feed from your logs, ready to analyse.

The Implementation

So, I have skimmed over the basics so far and a few of you may be wondering what steps need to be taken to implement this yourself. I have outlined them below. There will be other posts soon which show greater detail in searches, etc…

Here are some prerequisites:

  • Tableau Server (and admin rights)
  • Tableau Desktop
  • Splunk Enterprise
  • Splunk Forwarder
  • Splunk ODBC connector

Assuming Tableau Server is set up, is running and that you have a copy of Tableau Desktop I will continue.

1. Install Splunk Forwarder. 

To start using Splunk you will need to install the forwarder onto your Tableau Server. This can be found on the Splunk website, and the instructions can be found here: Install Forwarder. You will need to enter the name of your Splunk server and select the file location of the logs for Tableau Server. These logs reside in the ‘data’ directory of Tableau Server (see my other posts for details). I would also select the performance monitoring data, which is an option in the install wizard, as this means you can now turn off the performance monitoring that I blogged about previously. This is fairly simple, although you may need the assistance of your Splunk administrator to set up your Splunk index if you do not have access to do so yourself. The Splunk index is the location that your logs will be recorded to on the Splunk server; a good name for your index is “Tableau”. I am not a Splunk administrator so I had help with this bit. When my Splunk administrator confirmed the index was set up, I completed the install.

2. Search & Reports.

Once your forwarder is picking up data you will need to go to the Splunk search in your browser (the URL for your Splunk server). You will initially be given a search window. This window is the driver for all Splunk queries; it will allow you to search through your data and to create reports via the “Save As” option.

This is a complicated bit and is the basis of all of your data sources, so time needs to be taken to make sure it is set up right. In the search window, start by entering “index=tableau” (assuming your index is called Tableau). This will start returning your data. If you have set this up on multiple servers or still can’t find it, enter “host=[your tableau server]”. This should sort out any issues with index names, etc. As a start, save these results as your first report (e.g. “Tableau – All”) and we will proceed.
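For illustration, a saved report’s search might look something like the below. The index name follows the assumption above; the host name and the field list are placeholders to adapt to your own environment:

```
index=tableau host=mytableauserver (error OR warn OR fatal)
| table _time host source _raw
```

Anything you can express in the search here becomes a column in the report, and therefore a field in your eventual Tableau data source.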

3. Setup the Client connection. 

Assuming that my lack of detail in step 1 hasn’t stumped you too much, you will need to install the ODBC connector on your server (so you can publish your data source) and on your desktop (so you can build your new Splunk data source). The install will ask for the name of your server, the port (8089) and your user credentials for Splunk. Again, it is a wizard, so it should be fairly straightforward.

4. Connect to your new Splunk data. 

Once your ODBC connector is set up you will be good to go. Open Tableau Desktop and create a new data source; Splunk will appear in your server list. Select it and continue. Enter your server details as you would for a normal server: server name, port (8089), user and password. You will now be able to see the reports. If you completed step 2 you should find a report called “Tableau – All”. This will contain all of the data from the index and can be used as a table-style data source. There can be issues with the raw event data not being returned; in that case, create a new column in your search dialog box. I will do a follow-up soon to go into more detail.

The results 

So you have created a data source… it’s now time for your creative Tableau side to shine. Create a few extra fields in Tableau that use the “contains” function within an IF statement to look for “error”, “warn” or “fatal”. These will be of interest to you as an admin.
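As a sketch, such a severity field might look like the following. This assumes the raw log line comes through in Splunk’s default _raw field; adjust the field name to match your data source:

```
IF CONTAINS(LOWER([_raw]), "fatal") THEN "Fatal"
ELSEIF CONTAINS(LOWER([_raw]), "error") THEN "Error"
ELSEIF CONTAINS(LOWER([_raw]), "warn") THEN "Warning"
ELSE "Info"
END
```

Dropping this field on colour in a timeline view gives you an at-a-glance picture of when the service started misbehaving.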

I will post some examples of Splunk searches and some dashboards as and when I can get sign off to do so, so in the meantime, have fun!

Implementing Kerberos for Tableau

If you’re not looking to implement Kerberos for Tableau Server and you’re happy with NTLM for Active Directory then you’re probably not going to read this post to the end, but in the event that you are interested, here are a few little questions and answers which may help your implementation. Kerberos is available in Tableau Server 8.3 onwards.

Why Kerberos?

Kerberos, named after the three-headed guard dog of Greek mythology, is a method of authentication for Active Directory. It is generally accepted as a more secure method of authentication because its encrypted ticketing process makes it harder to impersonate users. Because of this extra security it is sometimes an IT requirement when dealing with sensitive data.

Domain Pre-check 

Your domain (for our purposes) is the network on which your server resides; however, it may not match the URL used to access your server. Your Tableau Server may be accessed via one address while actually being located on a different domain, with a DNS alias routing the traffic. The address your server actually sits on is known as your FQDN, or Fully Qualified Domain Name, and needs to be noted for security (not only for Kerberos but for certain SSL applications). If this is the case you could face issues when setting up Kerberos, so my suggestion is to test this first. It takes a moment and can save a lot of troubleshooting later.

Reverse Lookups

If you are not sure of your FQDN then there is a simple test that can be run from your desktop computer: a reverse lookup. This is a simple process that can usually be run without any special permissions.

To run this test, open Command Prompt and run the following (changing the server name):


Find the non-authoritative IP address (usually the second part of the response) and then run another lookup on that IP address to give the server name.


Again, the non-authoritative response is what you want and will contain the FQDN with the actual domain your server is sitting on.

In a perfect world, you have just done that and the result of the reverse lookup tells you that you already knew the FQDN and have been using it all along (it’s alright for some!).
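If you prefer to script the same check, the two nslookup steps can be reproduced as a forward lookup followed by a reverse lookup. The alias below is a placeholder; substitute your own server address:

```python
import socket

# Scripted version of the manual nslookup check: resolve the alias to
# an IP address, then reverse-resolve that IP to find the FQDN the
# server really sits on.

def find_fqdn(alias: str) -> str:
    ip = socket.gethostbyname(alias)        # forward lookup: alias -> IP
    fqdn, _, _ = socket.gethostbyaddr(ip)   # reverse lookup: IP -> real name
    return fqdn

# e.g. print(find_fqdn("tableau.mycompany.example"))
```

If the returned name differs from the alias you normally use, you have just found the mismatch described above.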

If you have just discovered that your server is actually sitting on another domain, you will need to contact your DNS support and have a change made to your Infoblox (or equivalent DNS) entry to match the assumed (original) FQDN, making sure that the domains have a full trust implemented. If they don’t, you may find yourself raising a Tableau case when you try to implement Kerberos.

Assuming everything has gone well thus far, it’s onto the next (and documented) steps…

Opening the Tableau configuration window on the server, you will find a tab called “Kerberos”. This is going to generate the main scripts for your Kerberos implementation.

After ticking the “Enable Kerberos” box you can then select the button to generate the Kerberos batch file. This file contains the commands to set up the Kerberos handshake. You will now need to employ the help of your Active Directory admin as there are some things not even a Tableau admin can do!

The generated batch file will contain:

  • Your service account password parameter
  • Your keytab output location
  • The relevant setspn commands
  • Your ktpass command

So what is an SPN? 

An SPN is a Service Principal Name. These are the URLs from which your incoming Tableau traffic arrives.

An example of a command is:

setspn -s HTTP/[your FQDN] mycompany\my_service_acc

You may wish to add more SPNs based on your environment; for example, you may have particular DNS settings directing traffic in the event of an outage. Adding them now will mean you don’t need to make any future changes. The addition of SPNs has to be done by an Active Directory admin.

Ktpass and keytab

Finally, the ktpass command. You can pass this to your Active Directory admin to run for you as part of the batch file, or you can run it yourself if you have passed the setspn commands over separately. You only need to be an Active Directory user to run this command, so if you want to keep your service account password top secret, this may be the best option.

If you are running this yourself you will want to replace the password parameter with your actual password and enter the output location to somewhere you can access. My suggestion is to create a folder in the ‘Tableau Server’ directory called “Kerberos” to contain the file.
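As an illustration only, the resulting command will look something along these lines. Every value below is a placeholder for your own environment, and your generated batch file may order the switches differently:

```
ktpass /princ HTTP/your.fqdn.example@YOURDOMAIN.EXAMPLE ^
  /ptype KRB5_NT_PRINCIPAL ^
  /pass your_service_account_password ^
  /mapuser mycompany\my_service_acc ^
  /out "C:\Program Files\Tableau\Tableau Server\Kerberos\kerberos.keytab"
```

The /out path is the keytab location you will point the Tableau configuration window at in the next step.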

When you run the command you may receive a warning relating to the service account not being mapped; this warning can be ignored, as the keytab file will still be created.


Once you have set the SPNs and generated your keytab file, you can go back to the configuration window in Tableau, select the keytab location and click the test button. You should get an “Ok” message. Take heed of any other messages, but you should not need to set up any delegations unless your domain admin states otherwise. If you need to add delegation, see the Tableau KB article.

Once you have started your Tableau service you can then test Kerberos by accessing Tableau Server from both your browser and Desktop. It is important to test both, as Tableau Desktop can sometimes revert to NTLM when there is an underlying issue, and as this may not be reported clearly it can go undetected. The expected behaviour is that no credentials are needed when logging into Tableau Server via either method. You can also look at the httpd error logs to identify errors or warnings relating to the Kerberos authentication.

If the expected behaviour is seen and no errors are returned in the logs, then it should be safe to assume that Kerberos is active for Tableau Server. For confirmation you can use klist commands to display your personal Kerberos tickets, and you can ask your Active Directory admin to check for issued tickets.

Tableau Win32 Error

Ok, so 1 of 3 things has happened for you to be reading this blog post (or you just like to read my ramblings!).

1. Your auto licensing check has failed and your Tableau Server is now unlicensed (yet you know you have a license).

2. You’re uninstalling Tableau Server and you have received an error stating that not all components have been uninstalled.

3. You’re installing/reinstalling Tableau Server and it will not initialise (giving you a Win32 error).

This is a frustrating issue that can cause outages for your service and a lot of headaches whilst searching for the cause – but all is not lost: I believe there is a cause and a solution. As with most of my posts, it hasn’t been verified by Tableau and is just my own opinion…


Tableau has a process where it checks your license validity between your cold storage (on your server) and Tableau’s online service. This check runs every 15 minutes and is entirely autonomous (usually). If something interrupts this process it can cause a file to become locked called “program”. This is located in the root directory for your Tableau Server install (eg C:\Program Files\Tableau Server\) and contains the handshake for the license check. A cause (I have found) for this issue is an admin remaining logged in (even in a disconnected state) and using tabadmin whilst the service runs a restart under a different account.


If you have experienced this issue, you can delete the “program” file and restart the Tableau services (or reinstall, if that is the process you are undertaking) to resolve the initial problem; however, there is a root cause that needs to be addressed. The solution is to make sure your maintenance windows are well defined and that admins are completely logged off of the server, not just in a disconnected state. It can be all too easy for someone to close their RDP session instead of selecting log off, so this needs to be ingrained into your normal administration processes.

Tableau Timeouts and the V9 change

Ok, it has been another long time since a post, so I thought I would ramble on about what today brought. You may have also noticed that I have moved to WordPress. This is because Webr, who originally hosted my blog, decided to up their fees (thanks guys).

Back to the Tableau work at hand…

Whilst testing the implementation plan for our Tableau Server V9 rollout I was configuring our custom settings. One of these settings is the Apache timeout value which (for us) was increased to cope with the network latency for our overseas users. This required a small change to the HTTPD template file (\\Tableau Server\[version]\Templates\httpd.conf.templ) by adding a keepalive limit above 5 seconds (which is the default).

Without going into too much detail, after some careful analysis of the number of timeouts (during which I increased the timeout value by 1 second per working day), 10 seconds was selected as the best value, as it saw the biggest drop in timeouts without increasing memory use too much (be aware that increasing it too far will harm your service). If you think you are experiencing timeouts, you can find out by looking at the error.log in the HTTPD folder within the logs – that’s my good deed done for today!
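For reference, the change in the template amounts to a couple of Apache directives (the 10-second value being the one settled on above; your own analysis may land on a different number):

```
KeepAlive On
KeepAliveTimeout 10
```

Remember this goes in httpd.conf.templ, not httpd.conf itself, so the setting survives a Tableau Server configuration rebuild.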

…This implementation plan test went well, but I noticed a small change that I didn’t really expect… Tableau has (after all of their quirky little messages in the template file) voluntarily upped the default timeout value to 8 seconds. This is odd, as Tableau (as far as I know) has employed a different technique to stop timeouts by allowing sessions to go into an idle state. Was this part of the plan or was it accidentally left in there? Who knows!

Either way, after some deliberation it was decided to keep the 10-second timeout (not the new default of 8), even though Tableau has had a number of performance increases, as the value represented our network performance – and if it’s not broken, don’t fix it!

Tonight I will do a little research into the timeouts as I am not one to leave a question unanswered…

I hope everyone else had a fun-filled Tableau day.

Monitoring Tableau Server

I wanted to post something on monitoring without stepping on toes, so I thought I would go over system performance monitoring. This is a really good way of getting valuable information on your server’s actual performance and Tableau’s consumption of resources.

Apologies if the steps aren’t extremely clear – They are meant to be guidelines, so you may need to look before you click 🙂

If you manage a Tableau instance then I am sure you are familiar with the admin views. These are great, but they don’t contain all of the information that you may need. You can go to the PostgreSQL DB and pull data from there, but there is a wealth of information to be obtained elsewhere…

Now, if you are a Windows Server admin then I am probably teaching you to suck eggs and you can probably skip this blog post… But if you aren’t, read on and I hope you learn something that you can use to benefit your Tableau service.

The Performance Monitor

In the Performance Monitor you can set up data collectors that output to flat (.csv) files based on what you would like to monitor on your server. This is a very small system overhead for very valuable information. These collectors are used for general Windows reporting, but they can also capture Tableau-related information – information that you and I need to manage a service properly. To get at this information you will need to follow the roughly described steps below.

To create your “Tableau” data collector (in Windows Server), go to:
Start>Programs>Administrative Tools and select Performance Monitor.

On the left of the window you will need to expand “Data collector sets” and right click on “user defined”. Then go to New>Data Collector Set.
Call your data collector set something meaningful… TabPerformance is always a good one.
You will want to manually select your options, so ensure “create manually (advanced)” is selected. Click next to continue.

When you are on the “What type of data do you want to include?” page, select “Performance Counter”, as it contains the best performance monitoring information. You can also collect event information, which is useful if you want to monitor for errors, but for this example we aren’t interested. Click next to continue.

Add your performance counters. This is where you choose what you want to monitor: CPU, memory, I/O, disk space… The list goes on. In this example we are going to monitor the Tableau-specific processes to check on their memory.

Select “Add” and then scroll down the counters list to “Process”. Expand this item and select “Private Bytes”.

Below this window you will see that process names appear. We are only interested in the main Tableau processes, so select the following via the search:
VizQLServer, Backgrounder, Dataserver, TDEServer64, Tabprotosrv, WGServer, Postgres, Tableau, Tabrepo and Tabsvcmonitor. These should give you a really good indication of what is doing what in the Tableau world.
Once these are added, select “OK” and continue. You can select your sample rate, which you may want to set to every 30 seconds, but it is up to you.
Click next and select the root directory. Place it on a drive that you can access as a datasource with Desktop. If you have Desktop installed on the server it will be easier; if not, try a shared drive. You may want to create a folder specifically for your collectors. Select next to continue and then finish without starting… We have some more tinkering to do before we start monitoring…
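Behind the wizard, each selection becomes a Performance Monitor counter path. The ones described above look roughly like this (the instance names are whatever Performance Monitor shows on your machine, so confirm them there):

```
\Process(vizqlserver)\Private Bytes
\Process(backgrounder)\Private Bytes
\Process(dataserver)\Private Bytes
\Process(tdeserver64)\Private Bytes
\Process(tabprotosrv)\Private Bytes
\Process(wgserver)\Private Bytes
\Process(postgres)\Private Bytes
\Process(tableau)\Private Bytes
\Process(tabrepo)\Private Bytes
\Process(tabsvcmonitor)\Private Bytes
```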

You should now have a new collector under the “User Defined” collectors. Right-click your lovely new collector and go to “Properties”.
Go to Schedule and add a schedule that runs every day at a specified time. Before your working day is always a good option. You do not want to have an expiry, so don’t click that!

Go to the Stop Condition tab and set a maximum size limit. 50–100 MB is best in my opinion, as it will capture a few days’ worth of data before it overwrites. Allow the collector to restart when it hits this limit – we do not want it continually consuming disk space.

After enabling these, hit “OK” and then select the file for the collector (shown in the large window on the right when your collector is selected).

Right-click on the file and then go to properties again. In here you will be able to see everything you are collecting. Change the Log Format to Comma Separated and then change the log name to the name you want your datasource to be. Allow “overwrite” and “circular” in the File tab before applying and closing with the “OK” button.

You are now ready to start your monitoring… Right click on your collector in the left menu and select “Start”.

This will now start pulling data into your log file, recording the memory consumption of the Tableau processes.
From here you will want to create a datasource in Tableau Desktop using this log file (as a live connection) and then publish it to your server.
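If the live connection to the raw .csv proves fiddly, you can also reshape the log into a tall, Tableau-friendly file first. A minimal sketch, assuming the standard perfmon CSV layout (timestamp in the first column, one counter per remaining column, Private Bytes reported in bytes):

```python
import csv

def reshape_perfmon_log(in_path: str, out_path: str) -> int:
    """Pivot a perfmon CSV (one column per counter) into tall rows of
    (timestamp, counter, megabytes). Returns the number of rows written."""
    written = 0
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)          # first cell is the PDH time column
        writer.writerow(["timestamp", "counter", "mb"])
        for row in reader:
            timestamp = row[0]
            for counter, value in zip(header[1:], row[1:]):
                if not value.strip():  # perfmon leaves gaps for missed samples
                    continue
                mb = float(value) / (1024 * 1024)  # Private Bytes -> MB
                writer.writerow([timestamp, counter, round(mb, 2)])
                written += 1
    return written
```

Run it on a copy of the log (remember the collector is overwriting the original in circular mode) and point Desktop at the output instead.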

Have fun with your new Tableau monitoring…