VizQL Server Analysis – Server Telemetry

This is the second of the ‘K’ analysis posts, covering the ‘Server-Telemetry’ log entry in the VizQL Server logs. You will find it in the same VizQL log location, but with the K tag of “server-telemetry”.

Ok, we understand and have started using the ‘qp-batch-summary’ log entry, but we want more, we want to find out everything about our workbooks… and we can…

Firstly, Server Telemetry is the summary of a workbook: the number of panes in a dashboard, the number of marks, tooltips, filters, etc… but it also gives us the usage of our dashboards, which can be used to form behaviour analysis of your users. Yes, that’s right, I said behaviour analysis. We can map the usage of our dashboards to show how our users interact with our content. This means we can:

a) pick up bad behaviour, such as using Tableau Server to download large amounts of data or bizarre filtering;
b) see if content is not being used as intended, for example filters not being used or frequently being changed on initial load, meaning a change to the dashboard could be required;
c) know ahead of time what a user likes to see and use in a dashboard and give that information to your desktop developer.

I am lucky enough to have a large user base and access to the basic HR data, so I can map geography, language, job type and even gender to dashboard usage to build a profile of my users. This means that I can tell what (for example) a male VP in Singapore will want in a dashboard design before the requirement lands in the inbox.

So what data can I get?

Let’s look at the main summary information first. I have put the key pieces of information below, grouped by category.

Size & Ratio

  • Device pixel ratio
  • Width
  • Height

Object Count

  • Annotation count
  • Custom shape count
  • Encoding count
  • Filterfield count
  • Mark count
  • Mark label count
  • Pane count
  • Text Mark count
  • Tooltip count

VizQL

  • Load workbook from file (count, min, max, total)
  • Parsing workbook (count, min, max, total)

Query

  • Process query batch (count, min, max, total time ms)
  • Process query (count, min, max, total time ms)

Bootstrap

  • Bootstrap session (count, min, max, total time ms)
  • Connect data sources (count, min, max, total time ms)

Sorting

  • Compute sort (count, min, max, total time ms)

Data Source

  • Connect (count, min, max, total time ms)

Note: this is not everything, but it is all of the main things that I use in my views. I use the above to identify content with a large number of filters, data source connections or long-running queries across a number of views. It is also a good place to get information when dealing with a user who has a problem, as it is easy to get all of the facts and figures straight away. Along with these pieces of information you get some identifiers so that you can tie them to users, content, etc. A quick sketch of pulling these entries out of the logs follows the identifier list below.

  • Timestamp
  • Username
  • VizQL session ID
  • Workbook names
  • Sheet/View names
  • Device Type

What about the golden ‘behavioural analysis’?


Here is the majority of what is available. It can be quite daunting, but it covers most actions a user can perform, including the web edit actions.

Action Name

  • zoom-level
  • view-quick-filters
  • view-data-highlighter-range
  • view-data-highlighter
  • validate-type-in-pill
  • validate-formula
  • validate-field-caption
  • undo
  • toggle-zone-title
  • toggle-mark-labels
  • toggle-legend-server
  • toggle-freeform-zone
  • toggle-field-blending
  • table-calc-toggle-secondary-calc
  • table-calc-set-ordering-type
  • table-calc-set-addressing-fields
  • table-calc-edit
  • table-calc-dialog-activated
  • table-calc-close
  • table-calc-change-type
  • table-calc-add
  • synced-change-page-server
  • swap-rows-and-columns
  • sort-from-indicator
  • show-row-totals
  • show-quickfilter-doc
  • show-me
  • show-hidden-fields
  • show-dashboard-title
  • show-col-totals
  • set-style-enum-string
  • set-style-color
  • set-sheet-published
  • set-quantitative-color
  • set-primitive
  • set-port-size
  • set-parameter-value
  • set-mark-size
  • set-item-encoding-type
  • set-filter-shared
  • set-default-shape
  • set-default-color
  • set-dashboard-sizing-with-validation
  • set-categorical-color
  • set-auto-update-server
  • set-all-sheets-hidden
  • set-active-zone
  • set-active-story-point
  • select-zone-parent
  • select-region-no-return-server
  • select-none
  • select-legend-items
  • select
  • run-action
  • revert-workbook
  • revert-story-point
  • restore-fixed-axes
  • render-tooltip-server
  • rename-sheet
  • relative-date-filter
  • refresh-data-server
  • redo
  • range-filter
  • quick-table-calc
  • quick-sort
  • quantitative-mode-quick-filter
  • previous-story-point
  • png-export-server
  • ping-session
  • pdf-export-server
  • pdf-export-options
  • pane-zoom-server
  • pane-pan-server
  • pane-anchor-zoom-server
  • page-toggle-trails
  • next-story-point
  • new-worksheet
  • new-dashboard
  • move-zone
  • move-row-totals
  • move-free-form-zone
  • move-dashboard-edge
  • move-column-totals
  • modify-zone-z-order
  • merge-or-split
  • master-detail-filter
  • level-drill
  • keep-only-or-exclude
  • insert-function-in-formula
  • include-in-tooltip
  • highlight-items-by-pattern-match
  • highlight-items
  • hierarchical-filter
  • hide-zone
  • group-by-table
  • goto-sheet
  • get-world-update
  • get-underlying-data
  • get-summary-data
  • get-storypoint
  • get-show-me
  • get-selection
  • get-drag-zone-resize
  • get-drag-pres-model-for-text
  • get-drag-pres-model
  • get-default-shape
  • get-default-color
  • get-dashboard-drag-drop
  • geographic-search-query
  • ensure-layout-for-sheet
  • enable-themed-highlights
  • edit-schema-caption
  • edit-pill
  • edit-copy-calc
  • edit-calc
  • duplicate-sheet
  • duplicate-fields
  • drop-prepare
  • drop-on-shelf
  • drop-on-dashboard
  • drop-on-calc-editor
  • drop-nowhere
  • domain-quick-filter
  • delete-sheet
  • delete-calculation-fields-command
  • create-type-in-pill
  • create-default-quick-filter
  • create-calc
  • convert-unnamed-fields
  • convert-to-measure
  • convert-to-discrete
  • convert-to-dimension
  • convert-to-continuous
  • close-data-source
  • clear-sheet
  • clear-highlighting
  • clear-calculation-model
  • change-semantic-role
  • change-field-type
  • change-data-type
  • change-alpha-level
  • change-aggregation
  • cell-type
  • cell-size
  • categorical-quick-filter-pattern
  • categorical-quick-filter-mode
  • categorical-quick-filter-exclude-values
  • categorical-filter-by-index
  • categorical-filter
  • calculation-auto-complete
  • build-title-context-menu
  • build-sheet-tab-context-menu
  • build-sheet-list-context-menu
  • build-data-schema-field-context-menu
  • build-data-schema-data-source-context-menu
  • build-data-schema-context-menu
  • bounding-box-pan
  • apply-type-in-pill
  • apply-calculation
  • add-to-sheet
  • add-subtotals
  • add-sheet-to-dashboard
  • add-reference-line
  • add-manual-items-to-filter
  • add-dataserver-data-source

So what can we do from here?

If you want to go to the absolute nth degree, why not build a decision tree or a Sankey diagram of top actions to display the data?

Example:

I’m sure you want to see something based on this data, so I have quickly created a view to display web edit usage for a single session. A lot of service issues can be caused by users who abuse this functionality, so being able to track and monitor it can be really helpful. A tip for this type of view is to remember how sessions work in Tableau Server. A session can be reused and a user can have multiple sessions, so use sessions and username in your detail or index computation to make sure it takes this into account.
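If you want to recreate something similar, the rough sketch below (Python/pandas) shows one way of ordering the actions per user and session once you have flattened the telemetry entries into rows. The column names (username, vizql_session, timestamp, action_name) and the CSV file are illustrative assumptions about how you have shaped the data.

import pandas as pd

# Flattened server-telemetry rows: one row per action (column names are assumptions)
actions = pd.read_csv("server_telemetry_actions.csv", parse_dates=["timestamp"])

# Order the interactions within each user's session - keying on both the username
# and the VizQL session ID, as a session can be reused and a user can hold several
actions = actions.sort_values("timestamp")
actions["step"] = actions.groupby(["username", "vizql_session"]).cumcount() + 1

# Pull out a single session to see the sequence of web edit actions
one_session = actions[actions["vizql_session"] == actions["vizql_session"].iloc[0]]
print(one_session[["step", "timestamp", "action_name"]])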


With this you can see a user who was editing a view. You can see the order of the interactions and when they occurred. Ultimately the user reverted their changes and didn’t save, but this shows what they did and how they did it. More information than this is available, but unfortunately I cannot publish it due to work restrictions (I will have to get permission first).

Anyway, this is it for now, I will endeavour to share some other views I have created in the not too distant future.

Have fun!


VizQL Server Analysis – QP Batch Summary

Ok, so you want to start diving deeper into your logs and get some really meaningful information. I am going to cover ‘K’ analysis within the logs over the coming months, beginning with the ever-useful ‘qp-batch-summary’. This is a summary view of this log entry and more information will follow depending on whether people want it.

The QP Batch Summary is a log entry within the VizQL Server logs and is captured every time a viz is executed. It contains some really useful information for administrators and will give you unprecedented insight into what your users are doing.

So what does it contain?

Below is an example of what a single entry contains. I have blanked out some parts for obvious reasons and there are a few more pieces, such as compiled query and query type.

Note: I use the compiled query over the abstract query.

So what are the key pieces of information here?

The query – See the query being requested by a user and their content. Analysis of this can give you the number of times a dashboard uses calculated fields, “CONTAINS” and other string functions, nested queries, overall length, etc… this is gold dust, not only for you, but also for Tableau Support in the event they need to help you. A rough sketch of this kind of analysis follows these items.

Elapsed times – This is the time it took for the query to run. Using this you can understand if a potential issue is VizQL, HTTP or Data Engine related. As an added extra you can use this to check queue lengths for the Data Engine.

User names – Find out who ran a particular query.

Dashboard – Find out what content the query relates to.

Session IDs – So, so useful for troubleshooting and joining to Data Engine or HTTP Data. Use this as your POI when troubleshooting problems.

The query type – Is it a viz (data integration) or is it a filter (quick filter controller)? This will allow you to work out the use of the query and even count the number of filters on a dashboard.

Cache – Was the cache used? What type of cache was used? This will tell you whether the query correctly used a cache.
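To give a feel for the query analysis mentioned above, here is a rough sketch that profiles a compiled query string with plain string heuristics. The counts are illustrative indicators only, not an official Tableau metric, and the patterns will need tuning for your own data sources.

import re

def profile_query(sql):
    """Return a few rough complexity indicators for a compiled query string."""
    upper = sql.upper()
    return {
        "length": len(sql),
        "contains_calls": upper.count("CONTAINS("),
        "string_functions": len(re.findall(r"\b(LEFT|RIGHT|MID|FIND|UPPER|LOWER)\s*\(", upper)),
        "nested_selects": upper.count("(SELECT"),
        "joins": upper.count(" JOIN "),
    }

example = "SELECT name FROM (SELECT * FROM sales) t WHERE CONTAINS(name, 'widget')"
print(profile_query(example))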

What can you do?

So here is a quick dashboard I put together which shows what you can make in a short period of time. I have created particular KPIs, such as Query Complexity, calculated as query string length over elapsed time. This gives you an idea of the heavy hitters.
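Here is a minimal sketch of that Query Complexity calculation, assuming you have already parsed the qp-batch-summary entries into a table. The file and column names (query, elapsed_ms, username, workbook) are assumptions about how you have shaped the data, not fields you will find under those exact names in the logs.

import pandas as pd

batches = pd.read_csv("qp_batch_summary.csv")  # column names below are assumptions

batches["query_length"] = batches["query"].str.len()
# Query Complexity: query string length over elapsed time (guard against zero elapsed)
batches["query_complexity"] = batches["query_length"] / batches["elapsed_ms"].clip(lower=1)

# Rank the entries by the KPI to spot the heavy hitters
print(batches.sort_values("query_complexity", ascending=False)
             .head(10)[["username", "workbook", "elapsed_ms", "query_complexity"]])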


A lot more can be done and I will provide some cool calculations that I use for analysing queries. This is probably one of the most important log entries for large implementations of Tableau Server.

As I am at the Tableau Data17 conference at the moment, this is all I have time for. I will make sure to post a brief summary of the conference shortly.

I hope this was a good taster of the QP Batch Summary. Have fun!

Question: How do I get a dashboard if I am in a remote location and have no internet?

I recently thought about this question when watching a documentary on fishing in the Antarctic. There was a need for some data on catch history and they radioed for the information, which was relayed by voice even though they had a laptop. It turned out that they did not have any internet, so they couldn’t send or receive data through conventional methods. This must be the same the world over, right? Fishermen, researchers, off-gridders and remote communities must all experience times when visual analysis could prove beneficial or even be lifesaving.

So how could we design something to achieve this? 

The one thing all of these groups seem to have in common is a radio transmitter/receiver. This is their connection to the outside world and could be used to transmit the data as audio. The next question is: what do we send? Do we send the source data or do we send the entire thing (an image)? If I am sending data, I imagine I could leverage something like Alteryx to output a TDE and then encode it into an audio format before sending it. The benefit of this is that if the receiving end has Tableau Desktop or Reader, it can be used as an interactive viz. One con is that audio can be prone to interruptions, which when sending data could result in a TDE not working. If I am sending the entire thing as something like an image, I don’t need to worry about software versions, missing small bits of data or any technical skill requirements. One con would be that this is just a static image. You are at the mercy of the dashboard developer to answer all of your questions during the development of the dashboard… one other problem is the level of detail you can send, however a single viz with pertinent information should be possible.

Let’s say I know I am going to send it via radio and I have decided to send an entire Tableau dashboard (subscription image). What is out there that is free and is robust enough to be used in this way? The answer I came up with is SSTV (Slow Scan Television). This is a process of encoding an image that is normally taken in a remote location, changing it into an audio format and sending it via radio to a receiver that then decodes it and displays the image on the other side. Think dial-up, but even more low-tech. Interestingly, this is the same process used for sending images from the Mars Rover to NASA or from other exploratory vehicles in space. If it is good enough for them then it should serve well in this instance – and most importantly, it’s free!

So what will this process look like?

I see this working something like this: Tableau Server Email subscription > SSTV encoder > *Radio Transmission > SSTV decoder

*There are a number of options available to send transmissions including LFR, VHF, UHF and X-Band. As VHF is the most common, we will use that in this experiment.

The biggest consideration with radio is range, which is limited by line of sight without any type of repeater. This means that from ground to ground, a normal person standing has a range of around 3 miles. If the transmitter is on a 330 ft radio tower, the range is 22 miles. If the tower is on a hilltop it can be upwards of 60 miles… and if you look up, a 23 Watt VHF radio transmitter can communicate with Voyager (+11.7 billion miles away), which would be some kind of world record for sending a dashboard, right?
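As a sanity check, the figures above line up with the usual line-of-sight horizon approximation d ≈ 1.22 × √h, with d in statute miles and h (antenna height) in feet. The radio horizon is actually a little further than the optical one, so treat these as rough planning numbers only.

from math import sqrt

def horizon_miles(antenna_height_ft):
    """Approximate line-of-sight distance to the horizon in statute miles."""
    return 1.22 * sqrt(antenna_height_ft)

print(round(horizon_miles(6), 1))    # ~3 miles for a person standing (eye height ~6 ft)
print(round(horizon_miles(330), 1))  # ~22 miles from a 330 ft radio tower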

So what do I need? 

A pair of radios and a pair of computers with SSTV encoders/decoders installed. I am using some Terrain 750 radios, my iPhone and my iPad (and the assistance of my missus, who kindly let me use her phone and did the transmission for me!). I have to admit that because I used my iPhone and iPad I did have to pay £2.99 for an app, but to prove that this is a zero cost solution, you can find a link to a free Windows version below. It is also available for Mac OS, etc…

Download a copy of a free SSTV encoder/decoder here: http://users.belgacom.net/hamradio/rxsstv.htm

So here it is, the proof of concept test:

Ok, so a few thoughts after this test. The quality was quite low, which means the viz would need to be bold, but this isn’t too much of a hassle – it would just be a design consideration. The good news is that this does work.
Original:


Received:


Moving forward…

What could we do to make this better? How about making it request based? Send a request tone and provoke a transmission of a particular viz… that’s certainly possible with some more time; otherwise it would need to be scheduled off the back of the Tableau schedule. If it is scheduled, you could have a different “feed” at various frequencies.

Remember this was just a proof of concept. It is nowhere near a complete solution or even the right solution… but it does work. I can now in theory send a Viz to the moon or even out beyond Voyager, which is actually really cool if you think about it, right? Ok, it might just be me who gets a little excited by this geek-out. 🙂 

Thanks for taking the time to follow through this. I know it is not my normal admin view type post, so I hope you found it a little interesting. Next time I will be showing you how to read some VizQL logs for service performance analysis.

Event History – Updated events

Below is an update to the events that can be found within the event history tables in Postgres. These events are the foundation of any admin view. This list was taken from Tableau Server 10.0.6. All of these events help build insight into user behaviour, which is an area of Tableau Server analysis I am focusing on at the moment.

How do I start with custom admin views?

I wanted to revisit the start of the admin views. It is easy to get ahead of ourselves and forget how it all started, and to forget that others may be going down the same road.

If you are looking to create your own admin views then you may be asking yourself “where do I start?” or “what do I need to cover in them?”…

The truth is that every service is different and requires various levels of introspection depending on the type of deployment or load. You will also find that there is no perfect set of admin views and they will always evolve over time.

My approach is the process below:

  1. List your service questions – what do you want to know? Your basic questions being usage based – e.g. Accesses, Publishes, etc…
  2. Wireframe your admin views – get the pen and paper out and design what you would like it to look like. This way you have no technological limit. 
  3. Create your data sources – using your Postgres database or logs, build your queries and data sources to answer your questions
  4. Create your views – build it to your wireframe design. Your wireframe is your guide only, so don’t let your creativity be hindered
  5. Repeat – your answers may prompt further questions. Go through the same process to answer your new questions

Example Wireframe:


The above wireframe is a great start to your admin views and is fairly easy to make, so feel free to use it as the basis of your own. Simply use the event history tables from your Postgres database to generate this entire dashboard. More info on the event history tables and how to build this can be found here.

Above Left: This is your portal page for your admin view. Use sparks and small charts to give you summary information.
Above Right: This is an example of a drill through action to a detailed dashboard for each area. The example shows the extract refreshes. This is your chance to interrogate your data.

Good luck on your admin view journey.

Offline Admin Views

Ok, you have a great set of admin views for your Tableau Server(s), but how useful are they if your server has gone down and all of your monitoring or log analysis has gone down with it? Do you go old school and go through the logs manually or do you use your offline admin views?

This post, although brief, is based on a question I asked myself recently; do I have admin view redundancy?

You and your team have worked hard building your admin views and, like my team, have published a number of data sources to the server(s) to reuse on multiple dashboards, etc… as any conscientious Tableau publisher should! However, by only having your dashboards and data sources available on the server – especially when you have included component or log analysis for the purpose of problem solving – you may have inadvertently created a dependency. You are as reliant on the service as your users.

The first part of this answer is simple; make sure you save a copy of your admin views on your network or your local computer (a network share is my preference as multiple team members can access it). It really is very easy to forget to save a local copy somewhere where everyone can access it.

Your data is a different story. If you’re using Postgres, you’re going to lose it unless you have a mirrored version of the db located off the server, and your extracted data sources are likely to be out of date in your backups too. In the event your actual server(s) is/are inaccessible for a period of time, your logs (even OS logs) are going to be unavailable also (cringe). BUT – if you’re using Splunk and have a nice little instance collating your logs, you’re in luck.


So… let’s recap: your servers are down and you have a copy of your admin views available on your network. Your live Splunk data sources in your admin views are actually still working fine, allowing a fully searchable set of views giving you performance metrics, errors from the logs and even a list of events from the OS, right up to the last second. How great is that?! No guessing, no assumptions on what could have gone wrong… It’s all there, right in front of you.

Of course, you should always have third-party applications and servers such as SCOM monitoring and Netcool doing their bits, but having this admin view redundancy in place gave me a feeling of relief and security – especially as a major part of my problem solving now revolves around the analysis and output from these views.

Overview of what Splunk gives you out of the box for your admin views:

  • Log collation – all of the Tableau Server logs in one place and one format.
  • Perfmon data – key perfmon data such as memory, CPU, disk and network usage.
  • Windows Events – system, application and security logs from the OS giving you everything that happened outside of the application.

With this data alone, I reckon anyone can figure out what happened to their servers; even a non-technical user can pick out the pertinent errors and problems if the data is portrayed well in a few views.

I would be interested to hear about your monitoring and admin view redundancy. Do you use admin views to carry out these types of debugging tasks? Feel free to message me via Twitter or something to let me know.

I hope this wasn’t too boring and pointless and you can sleep easy tonight knowing that when that call comes you have the data to hand.

Have a great day!

Event History Audit

You’re sitting at your desk and you get an angry phone call: “Someone has deleted my workbook! Who was it?!”

A call most of us have had at some point in our Tableau career, I am sure.

Let’s go one worse – a call from the legal department: “We need to see what ‘Joe Bloggs’ has been doing on Tableau Server – Now!” It has now become a nightmare and it may be time sensitive as well.

Although these seem like straightforward questions, they are not always easy to answer unless you have built the functionality to do so. This is where your custom admin views come into their own with Event History.

Using Postgres we can interrogate the internal Tableau database to answer a number of questions. These questions are generally Who, What and When – but what particular events can we track?

Here are some common events that you can monitor:

Access
  • Logout
  • Login
  • Download Workbook
  • Download Data Source
  • Access View
  • Access Data Source

Create
  • Create Workbook Task
  • Create System User
  • Create Site User
  • Create Schedule
  • Create Project
  • Create Group
  • Create Data Source Task
  • Add Comment

Delete
  • Delete Workbook Task
  • Delete Workbook
  • Delete View
  • Delete System User
  • Delete Site User
  • Delete Schedule
  • Delete Project
  • Delete Group
  • Delete Data Source Task
  • Delete Data Source
  • Delete Comment

Publish
  • Publish Workbook
  • Publish View
  • Publish Data Source

Send E-Mail
  • Send Subscription E-Mail For Workbook
  • Send Subscription E-Mail For View

Update
  • Update System User Full Name
  • Update System User Email
  • Update Site
  • Update Schedule
  • Update Project
  • Update Data Source Task
  • Update Data Source
  • Update Comment
  • Replace Data Source Extract
  • Refresh Workbook Extract
  • Refresh Data Source Extract
  • Move Workbook To
  • Move Workbook Task to Schedule
  • Move Workbook Task from Schedule
  • Move Workbook From
  • Move Datasource To
  • Move Datasource From
  • Move Data Source Task to Schedule
  • Move Data Source Task from Schedule
  • Increment Workbook Extract
  • Increment Data Source Extract
  • Enable Schedule
  • Disable Schedule
  • Change Workbook Ownership From
  • Change Workbook Ownership To
  • Change Datasource Ownership To
  • Change Datasource Ownership From
  • Append to Data Source Extract

Something to consider here is that the ‘login’ and ‘logout’ events will not be accurate when you have single sign-on, etc. You can use the bootstrap session information to track this accurately.

You can also monitor other things with API calls, etc., but for now let’s focus on these common questions.

We are going to work on the premise that you already have some working knowledge of accessing your Postgres database, so I am not going to cover opening that up in this blog post.

When you’re connected to Postgres, you will want to connect to the following tables using these joins:

Event history joins:

You will see I have included the joins for the tables. The bottom left box shows an example of the joins required to obtain the relevant event information. It is good to note the relationship between ‘Actor’ and ‘Target’: these refer to the person who is carrying out the event (Actor) and the person it affects (Target). Target will only apply to events like creating and deleting users, not to the owner of the content.

You can also join the main views and the ‘hist_’ tables to this to get extra information on workbooks, etc., but this will increase the volume of data, so I do not bring that back for my dashboard in order to maintain good performance.

You can also pick up failing extracts from this if you bring back the “Details” column from the Historical_Events table. The “Details” column is not always populated but is important, so don’t be tempted to leave it out due to it being a wide column.
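For reference, here is a hedged sketch of that base query run from Python via psycopg2. The table and column names follow the Tableau ‘workgroup’ repository schema as I understand it (historical_events, historical_event_types, hist_users, hist_workbooks); they can change between versions, so treat this as a starting point rather than a definitive statement of the schema, and swap in your own readonly credentials.

import psycopg2

SQL = """
SELECT he.created_at,
       het.action_type,
       het.name   AS event_type,
       actor.name AS actor,
       target.name AS target,
       hw.name    AS workbook,
       he.details
FROM historical_events he
JOIN historical_event_types het
     ON he.historical_event_type_id = het.type_id
LEFT JOIN hist_users actor  ON he.hist_actor_user_id  = actor.id
LEFT JOIN hist_users target ON he.hist_target_user_id = target.id
LEFT JOIN hist_workbooks hw ON he.hist_workbook_id    = hw.id
ORDER BY he.created_at DESC
LIMIT 500;
"""

# 8060 is the usual Tableau repository port; the password is whatever was set
# when the readonly user was enabled
conn = psycopg2.connect(host="mytableauserver", port=8060, dbname="workgroup",
                        user="readonly", password="********")
with conn, conn.cursor() as cur:
    cur.execute(SQL)
    for row in cur.fetchall():
        print(row)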

Once you have this as a data source you can then create a dashboard on top of it. This is down to your particular requirements, but I have a list of events, a timeline and a dreaded crosstab list (for event details – and yes, it is important, so I control it with filter actions). Filters are important here as well. I have a free text search parameter that flicks between names, details, etc., and filters for the Event Timestamp, Event Type and Action Type. Filtering will allow you to quickly search for people and content.

Example part of an Event History Dashboard:


That is all I have time for at the moment. I hope this goes some way to answering these questions for you. I will do some more posts on this moving forward as time allows. In the meantime I will try to answer your questions if you have any. A special shout out to the CAP admins who requested this info.

Thanks!

Jake

Splunk your Tableau

If you are a Tableau Server admin then I am sure you know what I mean by the “great log search” – the process by which you go wading through the millions of folders, files, data types and rows to identify that error and take appropriate action on it.

Wouldn’t it be great if you only had to go to one place and type a username, IP address or other keyword to identify that error? Better yet, what about having a dashboard that shows you what errors happened and when, and being able to actually use all of the log data for some proper analysis? That’s not possible, right? … well, actually it is, and it is easier to set up than you think. The answer is Splunk.

What is Splunk?

  Splunk is actually another analysis tool; however, the purpose of this post is not to analyse its visualisation capabilities, but to show that its indexing and database engine, leveraged together with Tableau, make for a formidable admin tool.

(Online example)

Once you have indexed your logs, Splunk will continue to read them at a sample rate, picking up any changes from the server logs and adding them to the index. The data is then available for you to “enrich”, which basically means identifying columns in your data based on a sample set. After you have done this you can generate a “report” (which for our purposes is an ODBC data source). Connect Tableau Desktop to your Splunk server using the Splunk drivers and you will then have a live feed from your logs available to analyse.

The Implementation

So, I have skimmed over the basics so far and a few of you may be wondering what steps need to be taken to implement this yourself. I have outlined them below. There will be other posts soon which go into greater detail on searches, etc…

Here are some prerequisites:

  • Tableau Server (and admin rights)
  • Tableau Desktop
  • Splunk Enterprise
  • Splunk Forwarder
  • Splunk ODBC connector

Assuming Tableau Server is set up and running, and that you have a copy of Tableau Desktop, I will continue.

1. Install Splunk Forwarder. 

To start using Splunk you will need to install the forwarder onto your Tableau Server. This can be found on the Splunk website, and the instructions on how to do this can be found here: Install Forwarder. You will need to enter the name of your Splunk server and select the file location of the logs for Tableau Server. These logs reside in the ‘data’ directory of Tableau Server (see my other posts for details). I would also select the performance monitoring data, which is an option in the install wizard, as this means you can now turn off the performance monitoring that I blogged about previously. This is fairly simple, although you may need the assistance of your Splunk administrator to set up your Splunk index if you do not have access to do so yourself. The Splunk index is the location that your logs will be recorded to on the Splunk server. A good name for your index is “Tableau”. I am not a Splunk administrator so I had help with this bit. When my Splunk administrator confirmed the index was set up, I completed the install.

2. Search & Reports.

Once your forwarder is picking up data you will need to go to the Splunk search in your browser. This is the URL for your Splunk server. You will be initially given a search window. This window is the driver for all Splunk queries. It will allow you to search through your data. It will also allow you to create reports in the “Save As” option.

This is a complicated bit and is the basis of all of your data sources, so time needs to be taken to make sure it is set up right. In the search window, start by entering “index=tableau” (assuming your index is called Tableau). This will start returning your data. If you have set this up on multiple servers or still can’t find it, enter “host=[your tableau server]”. This should sort out any issues with index names, etc. As a start, save these results as your first report (e.g. “Tableau – All”) and we will proceed.

3. Setup the Client connection. 

Assuming that my lack of detail in step 1 hasn’t stumped you too much, you will need to install the ODBC connector on your server (so you can publish your data source) and on your desktop (so you can build your new Splunk data source). The install will need the name of your Splunk server, the port (8089) and your user credentials for Splunk. Again, it is a wizard so it should be fairly straightforward.

4. Connect to your new Splunk data. 

Once your ODBC connector is set up you will be good to go. Open Tableau Desktop and create a new data source. In your server list will be Splunk. Select this and continue. Enter your server details as you would for a normal server: server name, port (8089), user and password. You will now be able to see the reports. If you have done step 2 then you should find a report called “Tableau – All”. This will contain all of the data from the index and can be used as a table-style data source. There can be issues with the raw event data not returning; in this event, create a new column in your search dialog box. I will do a follow-up on this soon to go into more detail.

The results 

So you have created a data source… It’s now time for your creative Tableau side to shine. Create a few extra fields in Tableau that use the “contains” function to look for “error”, “warn” or “fatal” within an IF statement. These will be of interest to you as an admin.
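For clarity, this is the logic those calculated fields express – an IF wrapped around contains checks – sketched here in Python rather than the Tableau formula language. The severity ordering (fatal before error before warn) is my own assumption.

def severity(raw_event):
    """Classify a raw log line in the same spirit as the Tableau calculated field."""
    text = raw_event.lower()
    if "fatal" in text:
        return "Fatal"
    elif "error" in text:
        return "Error"
    elif "warn" in text:
        return "Warning"
    return "Info"

print(severity("2017-10-11 10:02:11 vizqlserver: ERROR: query failed"))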

I will post some examples of Splunk searches and some dashboards as and when I can get sign off to do so, so in the meantime, have fun!

Implementing Kerberos for Tableau

If you’re not looking to implement Kerberos for Tableau Server and you’re happy with NTLM for Active Directory then you’re probably not going to read this post to the end, but in the event that you are interested, here are a few little questions and answers which may help your implementation. Kerberos is available in Tableau Server 8.3 onwards.

Why Kerberos?

Kerberos, named after the three-headed guard dog of Greek mythology, is a method of authentication for Active Directory. It is generally accepted as a more secure method of authentication because of its encrypted ticketing process, which makes it harder to impersonate users. Because of this extra security it is sometimes an IT requirement when dealing with sensitive data.

Domain Pre-check 

Your domain (for what we are interested in) is the network on which your server resides; however, it may not be the URL used to access your servers. Your Tableau Server may be accessed from https://mytableauserver.mycompany.com but actually be located on domain mysubdomain.net, with DNS routing traffic via an alias. This means that your server is actually mytableauserver.mysubdomain.net – this is also known as your FQDN or Fully Qualified Domain Name and needs to be noted for security (not only for Kerberos but for certain SSL applications). If this is the case you could face issues when setting up Kerberos, so my suggestion is to test this first. It takes a moment and can save a lot of troubleshooting further down the line.

Reverse Lookups

If you are not sure of your FQDN then there is a simple test that can be run from your desktop computer: a reverse lookup. This is a simple process that can usually be run without any special permissions.

To run this test, open Command Prompt and run the following (changing the server name):

nslookup mytableauserver.mycompany.com

Find the non-authoritative IP address (usually the second part of the response) and then run another lookup on that IP address to give the server name.

nslookup 16.132.168.15

Again, the non-authoritative response is what you want and will contain the FQDN with the actual domain your server is sitting on.

In a perfect world, you have just done that and the result of the reverse lookup tells you that you already knew the FQDN and have been using it all along (it’s alright for some!).

If you have just discovered that your server is actually sitting on another domain, you will need to contact your DNS support and have a change made to your Infoblox entry to match the assumed (original) FQDN, making sure that the domains have a full trust implemented. If they don’t, you may find yourself raising a Tableau case when you try to implement Kerberos.

Assuming everything has gone well thus far, it’s onto the next (and documented) steps…

Opening the Tableau Configuration window on the server, you will find a tab called “Kerberos”. This is going to be what generates the main scripts for your Kerberos implementation.

After ticking the “Enable Kerberos” box you can then select the button to generate the Kerberos batch file. This file contains the commands to set up the Kerberos handshake. You will now need to employ the help of your Active Directory admin as there are some things not even a Tableau admin can do!

The generated batch file will contain:

  • Your service account password parameter
  • Your keytab output location 
  • Relevant setspn commands
  • Your ktpass command

So what is an SPN? 

An SPN is a Service Principal Name. These are the URLs from which you get your incoming Tableau traffic, e.g. mytableauserver.mycompany.com.

An example of a command is:

setspn -s HTTP/mytableauserver.mycompany.com mycompany\my_service_acc

You may wish to add more SPNs based on your environment. You may have particular DNS settings directing traffic in the event of an outage or something similar, and adding them now will mean you don’t need to make any future changes. The addition of SPNs has to be done by an Active Directory admin.

Ktpass and keytab

Finally, the ktpass command. You can pass this to your Active Directory admin and they can run this for you as part of the batch file or you can run this yourself if you have passed the setspn commands separately. You only need to be an Active Directory user to run this command so if you want to keep your service account password top secret, this may be the best option.

If you are running this yourself you will want to replace the password parameter with your actual password and enter the output location to somewhere you can access. My suggestion is to create a folder in the ‘Tableau Server’ directory called “Kerberos” to contain the file.

When you run the command you may receive a warning relating to the service account not being mapped; this warning can be ignored as the keytab file will still be created.

Testing

Once you have set the SPNs and generated your keytab file, you can go back to the configuration window in Tableau, select the keytab location and click the test button. You should get an “Ok” message. Take heed of any other messages, but you should not need to set up any delegations unless your domain admin states otherwise. If you need to add delegation, see the Tableau KB article http://kb.tableau.com/articles/knowledgebase/kerberos-delegation-sql

Once you have started your Tableau service you can then test Kerberos by accessing Tableau Server from both your browser and Desktop. It is important to test both, as Tableau Desktop can sometimes revert back to NTLM if there is an underlying issue, and this may not be reported clearly so can go undetected. The expected behaviour is that no credentials are needed when logging into Tableau Server via either method. You can also look at the httpd error logs to identify errors or warnings relating to Kerberos authentication.

If the expected behaviour is seen and no errors are returned in the logs then it should be safe to assume that Kerberos is active for Tableau Server. For confirmation you can use klist commands to display your personal Kerberos tickets, and you can ask your Active Directory admin to check for issued tickets.

Tableau Win32 Error

Ok, so one of three things has happened for you to be reading this blog post (or you just like to read my ramblings!).

1. Your auto licensing check has failed and your Tableau Server is now unlicensed (yet you know you have a license).

2. You’re uninstalling Tableau Server and you have received an error stating that not all components have been uninstalled.

3. You’re installing/reinstalling Tableau Server and it will not initialise (giving you a Win32 error).

This is a frustrating issue that can cause outages for your service and a lot of headaches whilst searching for the cause – all is not lost though, as I believe there is a cause and there is a solution. As with most of my posts, it hasn’t been verified by Tableau and this is just my own opinion…

Cause

Tableau has a process where it checks your license validity between your cold storage (on your server) and Tableau’s online service. This check runs every 15 minutes and is (usually) entirely autonomous. If something interrupts this process it can cause a file called “program” to become locked. This file is located in the root directory of your Tableau Server install (e.g. C:\Program Files\Tableau Server\) and contains the handshake for the license check. One cause I have found for this issue is an admin remaining logged in (even in a disconnected state) and using tabadmin whilst the service runs a restart under a different account.

Solution

If you have experienced this issue, you can delete the “program” file and restart the Tableau services (or reinstall if this is the process you are undertaking) to resolve the initial problem; however, there is a root cause that needs to be addressed. The solution is to make sure your maintenance windows are well defined and that admins are completely logged off of the server, not just in a disconnected state. It can be all too easy for someone to close their RDP session instead of selecting log off, so it needs to be ingrained into your normal administration processes.