Using the web-Q API


An API is available for automatically updating the data of a dashboard. The API has a technical reference guide. This page describes how to use the API, with an example.

The API's main purpose is to allow you to automatically update the data for a dashboard. This is necessary if your data comes from a file on your computer (e.g. an SPSS .sav file or a CSV .csv file). If you instead use SQL Data when setting up your Q project, then the data will automatically update live every 24 hours, without needing to use the API at all.

The API can also be used to create a dashboard from scratch; however, this only makes sense if you need to regularly create new dashboards, their data has a very similar structure, and you do not need to customize the dashboard's layout (that is, the default layout of one table or chart per page is acceptable). The rest of this page only discusses updating the data of an existing dashboard.

Using the API to automatically update a dashboard

Once you have published your dashboard on web-Q, automatically updating it involves four steps:

  1. Find the Project Secret
  2. Write a script to generate the updated data
  3. Write a script to publish the updated data to web-q.com
  4. Schedule the script to run automatically

Find the Project Secret

The Project Secret is a token that gives you permission to update the project's data without logging in. This token will be saved in the script that publishes the data to web-q.com. To find your dashboard's secret:

  1. Log into the dashboard.
  2. Press the gear icon > YOUR_DASHBOARD_NAME > Settings.
  3. Look for the token to the right of Secret project key for API:. It will look something like this: f47f0b39-60bb-459f-9134-9e288ebc4fd3 (this is not a real secret).
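The secret is passed as the project parameter in API calls, in place of a login session. As a minimal sketch of where it ends up (using the ImportUpdatedData endpoint that appears in the full script later on this page; the secret, file name, and upload ID below are placeholders):

```r
# Placeholders -- substitute your own values:
project_secret  <- "f47f0b39-60bb-459f-9134-9e288ebc4fd3"  # not a real secret
upload_filename <- "my-data.csv"
upload_id       <- "12345"  # returned by a prior call to /API/Upload

# The secret authenticates the request; the full script POSTs to this URL.
update_url <- paste(
    "https://www.web-q.com/API/ImportUpdatedData",
    "?project=", project_secret,
    "&data_name=", upload_filename,
    "&upload_id=", upload_id,
    sep = "")
```

Treat the secret like a password: anyone who has it can overwrite your dashboard's data.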

Write a script to generate and publish data

Below is an example script, written in the R programming language, that we use internally to update a dashboard tracking our responsiveness to Q support tickets. This script assumes that the dashboard has already been created using a previous run of the script and its generated data file (rt-ticket-history.csv).

The first (and largest) portion of the script is dedicated to fetching data from our ticketing system, called Request Tracker, using its HTTP/REST API, creating a data.frame of this data, and then saving the data.frame to a .csv data file.

The last (and smallest) portion of the script then uploads this .csv data file to web-q.com and updates the dashboard using the new .csv file.

To create your own script to update your dashboard, try taking this script and changing the first portion to generate your own data.frame and .csv data file.

#!/usr/bin/env Rscript

library(httr)


####
# CONFIGURATION
####

####
# Request tracker configuration
####
request_tracker_url <- "https://your-request-tracker-domain/"

# Request tracker requires you to log in and then save your session cookie into this script
# http://requesttracker.wikia.com/wiki/REST#Authentication
request_tracker_cookie_name <- "RT_SID_XX.443"
request_tracker_cookie_value <- "xxxxxxxxxxxxxxxxxxxxxx"

# (You don't need to change this:)
rt_base_url <- paste(request_tracker_url, "/REST/1.0/", sep="")

# Useful for testing (only returns a few tickets):
#tickets_url <- paste(rt_base_url, "search/ticket?query=(id=66332%20OR%20id=66353%20OR%20id=59922%20OR%20id=46597)AND%20Status=%27resolved%27&format=i", sep="")
# Full ticket list (takes a few hours to run):
tickets_url <- paste(rt_base_url, "search/ticket?query=Status=%27resolved%27&format=i", sep="")

####
# Web-Q Configuration
####

# This _must_ match the name of the data file as it is seen in your Q project (look at the title of the window, after your project name)
upload_filename <- "rt-ticket-history.csv"

# This is the location where we temporarily save the CSV data file before uploading it to web-Q
# This path makes sense for linux:
csv_path <- "/tmp/rt-ticket-history.csv"
# Uncomment this path if on Windows, and replace "you" with your username:
#csv_path <- "C:/users/you/Desktop/rt-ticket-history.csv"

# The project secret from the Dashboard Settings page
project_secret <- "f47f0b39-60bb-459f-9134-9e288ebc4fd3"


####
# Polyfills if running in older versions of R
####

if (!exists("trimws")) {
    trimws <- function(x,left=TRUE,right=TRUE){
        res <- x
        old.locale <- Sys.getlocale("LC_CTYPE")
        Sys.setlocale("LC_CTYPE","C")
        if(left)
            res <- sub('^\\s+', '',res)
        if(right)
            res <- sub('\\s+$', '',res)
        Sys.setlocale("LC_CTYPE",old.locale)
        res
    }
}

####
# Using Request Tracker API to extract all tickets
####

ptm <- proc.time()

set_config(timeout(60))

retry_get <- function (url) {
    last_error <- NULL
    r <- NULL
    for (i in 1:10) {
        tryCatch({
            r <- GET(
                url,
                add_headers(Referer = request_tracker_url),
                # set_cookies() treats argument names literally, so pass the
                # cookie name/value pair as a named vector via .cookies
                set_cookies(.cookies = setNames(request_tracker_cookie_value,
                                                request_tracker_cookie_name))
            )
        }, error = function(e) {
            # assign into the enclosing function's scope, not the handler's
            last_error <<- e
            r <<- NULL
        })
        if (!is.null(r)) {
            break
        }
    }
    if (is.null(r)) {
        stop(last_error)
    }
    stop_for_status(r)
    r
}


r <- retry_get(tickets_url)
ticket_ids_text <- content(r, "text")
#cat(ticket_ids_text)
# e.g.
# RT/4.2.9 200 Ok
#
# ticket/18
# ticket/20
# ...

ticket_ids_matches <- gregexpr("(?<=ticket/)([0-9]+)", ticket_ids_text, perl = TRUE)
ticket_ids <- unlist(regmatches(ticket_ids_text, ticket_ids_matches))
# "18", "20", ...

get_ticket_record <- function(ticket_id) {
    ticket_url <- paste(rt_base_url, "ticket/", ticket_id, "/show", sep="")
    r <- retry_get(ticket_url)
    
    # Convert Field: Value lines into a list.
    ticket_fields_text <- content(r, "text")
    ticket_fields_matches <- gregexpr("(?<=\\n)[^:\\n]+:[^\\n]*(?=\\n|$)", ticket_fields_text, perl = TRUE)
    ticket_fields <- regmatches(ticket_fields_text, ticket_fields_matches)
    ticket_fields <- unlist(ticket_fields)
    tf <- lapply(ticket_fields, function (x) {
        trimws(paste(unlist(strsplit(x, ":", fixed=TRUE))[-1], collapse=":"))
    })
    names(tf) <- lapply(ticket_fields, function (x) {
        unlist(strsplit(x, ":", fixed=TRUE))[1]
    })

    # tf e.g.
    #  $ :List of 23
    #   ..$ id             : chr "ticket/20"
    #   ..$ Queue          : chr "Support"
    #   [...]
    
    # Get the ticket's history
    # Example:
    # /REST/1.0/ticket/20/history
    # RT/4.2.9 200 Ok
    # # 12/12 (/total)
    # 893705: Ticket created by XXX@XXX.com
    # 893706: Outgoing email recorded by RT_System
    # 893707: Outgoing email recorded by RT_System
    # 894286: Status changed from 'open' to 'resolved' by bob

    history_url <- paste(rt_base_url, "ticket/", ticket_id, "/history", sep="")
    r <- retry_get(history_url)
    history_text <- content(r, "text")

    # Scan the history for the first "Outgoing email recorded by RT_System" item.
    # Then look up the history details for the item, and find the ID of the first attachment.
    # Then look up the content of the attachment, and extract the Bcc address from the headers.
    # This is the email address of the dispatcher at the time of ticket creation.
    outgoing_email_matches <- gregexpr("\\d+(?=: Outgoing email recorded by RT_System(\\n|$))", history_text, perl = TRUE)
    outgoing_emails <- regmatches(history_text, outgoing_email_matches)
    outgoing_emails <- unlist(outgoing_emails)
    dispatcher <- ""
    if (length(outgoing_emails) > 0) {
        first_outgoing_id <- head(outgoing_emails, 1)
        first_outgoing_url <- paste(rt_base_url, "ticket/", ticket_id, "/history/id/", first_outgoing_id, sep="")
        r <- retry_get(first_outgoing_url)
        first_outgoing_text <- iconv(content(r, "text"))
        attachments_matches <- gregexpr("(?<=Attachments:)[\\s\\n]+[\\d]+(?=:)", first_outgoing_text, perl = TRUE)
        attachments <- regmatches(first_outgoing_text, attachments_matches)
        attachments <- unlist(attachments)
        if (length(attachments) > 0) {
            attachment_id <- trimws(head(attachments, 1))
            attachment_url <- paste(rt_base_url, "ticket/", ticket_id, "/attachments/", attachment_id, sep="")
            r <- retry_get(attachment_url)
            attachment_text <- iconv(content(r, "text"))
            bcc_matches <- gregexpr("(?<=BCC: )[^\\n]+(?=\\n)", attachment_text, perl = TRUE)
            bcc <- regmatches(attachment_text, bcc_matches)
            bcc <- unlist(bcc)
            if (length(bcc) > 0) {
                dispatcher <- head(bcc, 1)
            }
        }
    }

    # Scan the history for "Correspondence added by XXX" to determine:
    # a.   Number of emails sent by the client for a ticket
    # b.   Number of emails sent by us for a ticket
    # c.   Names of the staff who sent emails for a ticket (e.g., whether both Tim and Steve responded to it)
    all_correspondence_matches <- gregexpr("(?<=(Correspondence added by )|(Ticket created by ))[^\\n]+(?=\\n|$)", history_text, perl = TRUE)
    all_correspondence <- regmatches(history_text, all_correspondence_matches)
    all_correspondence <- unlist(all_correspondence)
    # Staff correspondence is recorded by their name
    # Client correspondence is recorded by email address
    client_emails <- Filter(function(x) grepl("@", x), all_correspondence)
    staff_emails <- Filter(function(x) !grepl("@", x), all_correspondence)

    # Time of the first and last email sent by us.
    # Get history item IDs of correspondence by staff
    correspondence_matches <- gregexpr("\\d+(?=: Correspondence added by \\w+(\\n|$))", history_text, perl = TRUE)
    correspondence <- regmatches(history_text, correspondence_matches)
    correspondence <- unlist(correspondence)
    first_id <- head(correspondence, 1)
    last_id <- tail(correspondence, 1)
    
    # Find the creation time of the correspondence history items.
    get_history_creation_time <- function (history_id) {
        item_url <- paste(rt_base_url, "ticket/", ticket_id, "/history/id/", history_id, sep="")
        r <- retry_get(item_url)
        item_text <- content(r, "text")
        # iconv hacks around weird unicode bug, e.g.
        # > str(item_text)
        # Error in str.default(item_text) : invalid multibyte string, element 1
        item_text <- iconv(item_text)
        item_matches <- gregexpr("(?<=\\nCreated: )[\\d:\\- ]+(?=\\n)", item_text, perl = TRUE)
        item <- regmatches(item_text, item_matches)
        unlist(item)
    }
    first_created <- ""
    last_created <- ""
    if (length(first_id) > 0) {
        first_created <- get_history_creation_time(first_id)
        last_created <- first_created
        if (first_id != last_id) {
            last_created <- get_history_creation_time(last_id)
        }
    }
    
    c(ticket_id, tf$Queue, tf$Owner, tf$Created, tf$Requestors,
      # Date of the final time it was set to Resolved.
      tf$Resolved,
      length(client_emails),
      length(staff_emails),
      paste(staff_emails, collapse=","),
      first_created,
      last_created,
      tf$Subject,
      paste(all_correspondence, collapse=","),
      dispatcher
     )
}
xx <- lapply(ticket_ids, get_ticket_record)

df <- data.frame(
    TicketID=unlist(lapply(xx, function(x) x[1])),
    RTQueue=unlist(lapply(xx, function(x) x[2])),
    Owner=unlist(lapply(xx, function(x) x[3])),
    EmailReceivedTimeUTC=unlist(lapply(xx, function(x) x[4])),
    Requestors=unlist(lapply(xx, function(x) x[5])),
    ResolvedTimeUTC=unlist(lapply(xx, function(x) x[6])),
    NumClientEmails=unlist(lapply(xx, function(x) x[7])),
    NumStaffEmails=unlist(lapply(xx, function(x) x[8])),
    StaffEmailers=unlist(lapply(xx, function(x) x[9])),
    FirstResponseTimeUTC=unlist(lapply(xx, function(x) x[10])),
    LastResponseTimeUTC=unlist(lapply(xx, function(x) x[11])),
    Subject=unlist(lapply(xx, function(x) x[12])),
    EmailSenderChain=unlist(lapply(xx, function(x) x[13])),
    Dispatcher=unlist(lapply(xx, function(x) x[14]))
)

time_taken <- proc.time() - ptm

#print(time_taken)

tickets <- df



####
# Update the Q Support dashboard using the new data
####

# write the data.frame to a .csv file for uploading
write.csv(tickets, csv_path)

# Upload the data file
WEBQ_BASE <- "https://www.web-q.com"
UPLOAD_PATH <- "/API/Upload"
upload_url <- paste(WEBQ_BASE, UPLOAD_PATH, sep="")
r <- POST(upload_url, body = list(file = upload_file(csv_path)))
stop_for_status(r)

# The response gives us an UploadID to use when updating the dashboard's data
upload_id <- headers(r)$UploadID

# import the updated data file into the dashboard
update_url <- paste(WEBQ_BASE, "/API/ImportUpdatedData?project=", project_secret, "&data_name=", upload_filename, "&upload_id=", upload_id, "&partial=false&abort=OnWarning", sep="")
r <- POST(update_url)
update_status <- status_code(r)
if (update_status != 200) {
    update_response <- content(r, "text")
    stop(paste("ImportUpdatedData failed with status ", update_status, ": ", update_response, sep=""))
}

# Example of successful server response:
# > content(r, "text")
# [1] "<!DOCTYPE html>\r\n<html>\r\n<head>\r\n    <title>API/ImportUpdatedData Output</title>\r\n</head>\r\n<body>\r\n<h1>Success</h1>\r\n\r\n\r\n</body>\r\n</html>"
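To adapt the script to your own dashboard, only the data-generation portion needs to change: any code that produces a data.frame and writes it to the .csv path will do, and the upload portion can be kept as-is. A minimal stand-in for the data-generation portion (the columns below are made up; use whatever structure your dashboard's data file expects):

```r
# A temporary path for this sketch; the real script uses a fixed csv_path.
csv_path <- tempfile(fileext = ".csv")

# Made-up example data standing in for the Request Tracker extraction.
tickets <- data.frame(
    TicketID = c("18", "20"),
    Owner = c("bob", "alice"),
    NumClientEmails = c(3, 1),
    stringsAsFactors = FALSE
)

# Write the data.frame to a .csv file for uploading, as the script does.
write.csv(tickets, csv_path)

# From here the upload portion of the script is unchanged:
# POST the file to /API/Upload, then call /API/ImportUpdatedData.
```

Remember that the data_name passed to ImportUpdatedData must still match the name of the data file as it appears in your Q project.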

Schedule the script to run automatically

Windows

  1. Follow this guide to set up a scheduled task, and decide how often you want the task to run (and therefore how often the dashboard updates).
  2. In the Program/script text box enter the full path to your Rscript.exe, e.g.,
    C:\Program Files\R\R-3.2.3\bin\Rscript.exe
  3. In the Add arguments text box enter the full path to your .R script, e.g.,
    C:\Users\you\Desktop\update-dashboard.R

Linux

  1. Ensure your .R script file has
    #!/usr/bin/env Rscript
    as the first line.
  2. Ensure the file is executable:
    chmod +x update-dashboard.R
  3. Add a cron job to run the script at a scheduled time, e.g.,
    0 11 * * 4 $HOME/bin/update-dashboard.R
    will run the R script every Thursday at 11 AM.