
Consuming data: comparing FTP and web services

At Xignite we consume data through a wide variety of mechanisms, such as FTP and web services, and we run tens of thousands of checks against our own services. Let’s look at the differences between consuming data through FTP and through web services.

When consuming data through FTP you have to:

  1. Log into the FTP site.
  2. Navigate to the appropriate location.
  3. Download the correct file.

Let’s take a look at Microsoft’s own documentation for those first 3 steps:

// Get the object used to communicate with the server.
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://www.contoso.com/test.htm");
request.Method = WebRequestMethods.Ftp.DownloadFile;

// This example assumes the FTP site uses anonymous logon.
request.Credentials = new NetworkCredential ("anonymous","janeDoe@contoso.com");

FtpWebResponse response = (FtpWebResponse)request.GetResponse();
    
Stream responseStream = response.GetResponseStream();
StreamReader reader = new StreamReader(responseStream);
Console.WriteLine(reader.ReadToEnd());

Console.WriteLine("Download Complete, status {0}", response.StatusDescription);
    
reader.Close();
response.Close();  

http://msdn.microsoft.com/en-us/library/ms229711%28v=vs.110%29.aspx

But there is still a 4th step: you have to parse that data to pick out what you want. Here’s a quick example, adapted from a StackOverflow answer:

// (requires using System, System.Collections.Generic, System.Linq, and System.Text)
public static List<string> SplitCSV(string line)
{
    if (string.IsNullOrEmpty(line))
        throw new ArgumentException();

    List<string> result = new List<string>();

    bool inQuote = false;
    StringBuilder val = new StringBuilder();

    // parse line
    foreach (var t in line.Split(','))
    {
        int count = t.Count(c => c == '"');

        // an odd number of quotes opens a quoted value containing a comma
        if (count % 2 == 1 && !inQuote)
        {
            inQuote = true;
            val.Append(t);
            val.Append(',');
            continue;
        }

        // an odd number of quotes closes the quoted value
        if (count % 2 == 1 && inQuote)
        {
            inQuote = false;
            val.Append(t);
            result.Add(val.ToString());
            val.Clear();
            continue;
        }

        // outside a quoted value: the token is a complete field
        if (!inQuote)
        {
            result.Add(t);
            continue;
        }

        // inside a quoted value: the comma we split on is part of the data
        val.Append(t);
        val.Append(',');
    }

    // remove surrounding quotation marks
    for (int i = 0; i < result.Count; i++)
    {
        string t = result[i];
        if (t.Length >= 2 && t[0] == '"' && t[t.Length - 1] == '"')
            result[i] = t.Substring(1, t.Length - 2);
    }

    return result;
}

http://stackoverflow.com/questions/316649/csv-parsing

Of course, you’d have to write more code to piece the two together: loop over the parser above for every line in the file, then pick out the data you want. So it would take even more code than shown just to extract a price, roughly like the sketch below.
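To make that concrete, here is a minimal sketch of that glue code. It assumes the file has already been downloaded as above, that the SplitCSV method from the StackOverflow answer is in scope, and a made-up file layout with the symbol in the first column and the last price in the third:

using System;
using System.IO;

class ExtractPrice
{
    static void Main()
    {
        // walk the downloaded file line by line
        foreach (string line in File.ReadLines("quotes.csv"))
        {
            // SplitCSV is the parser shown above
            var fields = SplitCSV(line);

            // hypothetical layout: column 0 holds the symbol, column 2 the last price
            if (fields.Count > 2 && fields[0] == "JACK")
            {
                Console.WriteLine("Last price for JACK: {0}", fields[2]);
                break;
            }
        }
    }
}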

However, the point of this post isn’t walking through all the details of consuming data over FTP. So let’s turn our attention to consuming data through web services. To use web services, I just have to initialize a few objects and issue a call:

//declare and initialize variables
RemoteGlobalQuotes.XigniteGlobalQuotes GlobalQuotesService = new RemoteGlobalQuotes.XigniteGlobalQuotes();
RemoteGlobalQuotes.Header GlobalQuotesHeader = new RemoteGlobalQuotes.Header();
RemoteGlobalQuotes.GlobalQuote Result = null;

GlobalQuotesHeader.Username = "Your token here";
GlobalQuotesService.HeaderValue = GlobalQuotesHeader;

//...then just one line to call the service to get a quote for Jack in the Box
Result = GlobalQuotesService.GetGlobalDelayedQuote("JACK", RemoteGlobalQuotes.IdentifierTypes.Symbol);

//You don't have to parse, that's done for you so just display the last price
Console.WriteLine(string.Format("Current delayed price for JACK is {0}", Result.Last));

The above is significantly less code than the FTP approach, yet it does even more, such as packaging the result into native C# objects that I can use directly in my code.

Even if you aren’t a developer, you can see from the above that your company’s developers are going to spend significantly more time pulling data through FTP than through a web service. To me it has always been a no-brainer: why spend your time consuming data through legacy delivery systems such as FTP instead of an easy-to-consume mechanism such as web services? If getting data is easy, you’re free to spend more time doing what matters to your business: building the application.

Disclaimer: to keep the examples concise, error handling has been omitted. That is something you’d want to include in production code.

AWS Issue

On October 1, 2014 at 3:31 AM Pacific time, I was awoken by a series of Pingdom alerts escalated to me via PagerDuty from our Shanghai operations team, including:

Hello Matt Purkeypile,
You are assigned 1 triggered incident in PagerDuty:
Please visit the following URL to manage this incident.
https://xignite.pagerduty.com/dashboard
 1) Incident #5429
  Opened on: Oct 1 at 3:20am PDT
  Service: pingdom
  Description: Pingdom Alert: incident #18417 is open for XigniteSuperQuotes (AWS US-East-1) (superquotes.xignite.com)
  Link: https://xignite.pagerduty.com/i/5429
  Escalation Policy: 2013 Ops Rotation
  Details:
  Hi Pager Duty,
  This is a notification sent by Pingdom.
  Incident 18417, check 1291070 XigniteSuperQuotes (AWS US-East-1) (superquotes.xignite.com), is currently open.
  It has been open for 0 minutes.
  Log in to your account at https://my.pingdom.com to see further details and take the necessary actions.
  Best regards,
  The Pingdom Team

By the time I was up and online, though, systems were back to normal. Nonetheless, a flood of outage notifications is nothing to brush off, even if it comes in the middle of the night and the outage is brief. Since this was across our AWS US-East-1 based services, I checked Amazon’s status page, but they reported everything operating normally.

Looking further at these Pingdom notifications on our dashboard, I noticed the outage covered everything we had running in the AWS US-East-1 region. Not only were our core services impacted there, but so were completely different stacks in the same region, as well as some of the data providers we monitor in that region. At the same time, services hosted outside US-East-1 were reported as fine. For example, here’s what I could see for the above alert:

AWS Outage Pingdom Details

Taking a look at our primary stack in that region, I could see everything was up and running fine. However, there was a brief blip where traffic dropped to almost nothing, even though we were up:

CloudWatch Stats

Given that:

  1. We detected an AWS US-East-1 outage across multiple stacks, both our own and other data providers’.
  2. Our infrastructure was up, but traffic dropped to near nothing.

The conclusion at the time was that this was an AWS problem, even if they didn’t say so. In fact, I issued an internal alert to our entire engineering and support teams at 03:57 saying as much. A few minutes later, Amazon did acknowledge the problem:

AWS Alert

Of course, this raises the question: what if the AWS problems had lasted longer, say a couple of hours instead of a couple of minutes? In fact, we had a problem on our BATS real-time service last week that was restricted to AWS. To fix it for customers, we redirected batsrealtime.xignite.com to an alternate site that wasn’t impacted. This quickly resolved the problem for our customers, and allowed us to take the time to be sure the issue was truly resolved in AWS before sending traffic back.

This is another demonstration of how operations is a differentiator for Xignite, not just something that has to be done. We were able to quickly detect, troubleshoot, and recognize the problem, and issue an internal alert, all before Amazon acknowledged it.

Using Pingdom for Basic Third Party Monitoring

Pingdom essentially performs basic health checks of public-facing endpoints. At Xignite, we’ve been a long-time user of Pingdom, since 2009. As mentioned previously, we’ve spent years building a highly Xignite-specific monitoring system, so why also use Pingdom? There are a few reasons:

  1. Gives us an independent set of “eyes”.
  2. Tests our API from many locations across the globe.
  3. Objectively measures our uptime for service level agreements (SLAs).
  4. Provides a status page, http://status.xignite.com, completely independent of any of Xignite’s infrastructure.

This independent set of eyes (#1) from multiple locations (#2) is helpful in detecting general connectivity problems customers might be experiencing due to Internet issues outside Xignite’s infrastructure. As an example, it was these independent checks that first alerted our operations staff to the June 2014 DNS incident, and our integration of these Pingdom alerts with PagerDuty allowed us to react quickly.

The neutrality in measuring SLAs (#3) is another important aspect. Because an independent third party measures uptime, there aren’t disputes about what is or is not considered up.

In fact, you don’t have to take our word for our reliability: we publish it, going back to April 2013 and updated in real time, at http://status.xignite.com.

status.xignite.com

You can always go take a look yourself to see whether there are any serious problems with Xignite’s infrastructure. In the event of an unlikely major failure, this lets you check for yourself instead of making a support request. Rest assured, though: if you see any problems here, Xignite will already be issuing an “all hands on deck” to respond.
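Conceptually, each of those checks boils down to something like the minimal sketch below: request a public endpoint and verify that it answers within a timeout. Pingdom layers the multi-location probing, scheduling, alerting, and reporting on top of this simple idea.

using System;
using System.Net;

class UptimeCheck
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://superquotes.xignite.com/");
        request.Timeout = 30000; // fail the check if no answer within 30 seconds

        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Up, status {0}", (int)response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            Console.WriteLine("Down: {0}", ex.Message);
        }
    }
}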

To summarize, Pingdom is just another tool in our toolbox to help ensure all systems are up and running normally.

New Account Dashboard

Hello there! We recently rolled out a new account dashboard for our customers, which we hope you will like. A couple of the most common questions we get here at Xignite boil down to usage and API tokens, and the new dashboard should address both. It replaces the old “My Account” page, so you’ll automatically see it when you log into your account.

The dashboard as it is today serves 3 main purposes:

  1. Viewing your usage
  2. Managing your API tokens
  3. Updating your account information

To do this we’ve completely swapped out the old individual pages for a new, unified collection of modules that we think is far more intuitive. Check it out for yourself today.

What’s new

And here are some of the things we’ve added:

  • A dashboard that you can use to quickly glance at your current usage. This lets us show you the important stuff first, and we’ll be adding more here as we go.
  • Speed and aesthetics, basically an improved UI. We ditched our old wheels for new ones to save you time and confusion.
  • More usage data: day-by-day usage charts that you can view across any time period.

What’s changed

Other than everything else mentioned, we moved some stuff around. The old links will redirect to the new ones, so your bookmarks will still work. But in case you’re wondering how to get to the old stuff from the dashboard, here’s where they moved to:

Old                             New
---                             ---
Manage My API Tokens            APIs -> API Tokens
Check My Subscription Status    APIs -> Subscriptions
Check My Free Trial Status      APIs -> Trials
Change My Password              Account -> Change Password
Update My Profile               Account -> Profile
Manage My Payment Methods       Account -> Payment Methods

So there it is. We hope our users will find this much nicer than what it replaces. And if you have any feedback, please let us know! Send it our way to feedback@xignite.com.
For any support issues or bug reports, please visit our support page.

Simple charting in R

R is a popular choice in some circles, and we’ve had several requests for examples of using R with Xignite web services. Personally, I had not used R before, but this was surprisingly easy to do with just a little research. I’ll give you the code and then a brief walkthrough of each section:

library(RCurl)
library(XML)

#call the web service
Result <- basicTextGatherer()
curlPerform(url = "http://www.xignite.com/xGlobalHistorical.xml/GetGlobalHistoricalQuotesRange?IdentifierType=Symbol&Identifier=RBS.XLON&AdjustmentMethod=SplitOnly&StartDate=8/22/2013&EndDate=8/21/2014&Header_Username=YOUR_TOKEN", writefunction = Result$update)
ResultXML = xmlRoot(xmlTreeParse(Result$value()))

#parse the result and draw the chart
DataPoints = xmlElementsByTagName(ResultXML, "Last", TRUE)
#extract the numeric value of each node before plotting
Prices = as.numeric(sapply(DataPoints, xmlValue))
plot(Prices, type="o", col = "blue", ylim=c(0, 5), ann = FALSE, axes = FALSE)
title(main = "RBS.XLON 1 Year History", col = "Red", xlab = "Date", ylab = "Price")
axis(2, at=1*0:5)
box()

The first block, the library calls, loads the necessary libraries: RCurl and XML. These are libraries I had to add to my default R installation, which just consisted of copying the folders into my R library folder.

The second block is where the web service is called. I chose to pull the data as XML and load it up that way for easy parsing.

The third and final block is where the chart is actually drawn. I leveraged xmlElementsByTagName to go through the XML results and extract all the Last data points. I then used plot to draw the actual chart, and the next few lines pretty it up, giving me the end result:

RBS.XLON 1 Year History

Obviously this could be taken further, making the size of the chart dynamic for example. Another improvement would be to use the “fields” parameter to reduce the amount of data that comes back from the service.

So there wasn’t much to using Xignite within R, even though I had never written a line of R before. In fact, I ended up spending more time playing around with the charting and the basics of R than I did gathering the data from Xignite.

New .NET and Java SDKs for Financial Market Data APIs

Here at Xignite, we know how time-consuming (frustrating, or even intimidating!) it can be for anyone to read through pages and pages of documentation while working with something new to build a solution. That’s why, at the end of the day, our goal is to make accessing financial market data as easy as possible for our customers.

As of today we are releasing our .NET SDK and Java SDK, which we hope will make it even easier for people to call our APIs for financial market data. Each SDK comes with support for all our public APIs (over 40!). It also includes all the classes, enums, and inline documentation for each one, so you’ll never feel lost (for all you IntelliSense lovers out there).

For a quick overview of the SDKs and information specific to each, head on over to their respective pages. In general, both SDKs support:

  1. Making RESTful calls to our web services
  2. Handling authentication
  3. Supporting synchronous and asynchronous calls (asynchronous is .NET only)

There are still features we want to add, and possibly more languages planned (For all you Python, Ruby and PHP fans…stay tuned). So if you have any suggestions or issues to report, please email sdk@xignite.com.

Utilizing PagerDuty for Critical Operations Alerts

Xignite’s monitoring system is constantly performing health checks to detect problems, or even potential problems. The notifications it generates fall into one of three general buckets:

  1. Informational. Example: CPU usage is higher than normal on a machine.
  2. This needs to be looked at. Example: archiving some data failed.
  3. This is a critical problem someone needs to look at NOW. Example: third party monitoring is detecting services as down, as in the Zayo DNS incident.

It is that third bucket I’d like to talk about here. Even for staff that is on-duty, how do you alert them immediately when there is a major incident? What if they’re talking with a co-worker or in a meeting? Do you really want the timeliness of your response to be based on how long it takes them to finish up a conversation and get back to their computer?

Xignite’s solution to this, which we’ve had in place for several years, is PagerDuty. With PagerDuty it was easy for us to extend our monitoring systems to immediately notify operations staff through a variety of means, including SMS, email, and phone calls. For myself, I prefer to be emailed and paged first, and then to have PagerDuty start calling my various numbers (cell, office, and home) until I answer:

My notification rules

Escalation policies can be defined so that if an employee does not acknowledge an alert, it automatically escalates to the next person on the list, and so on. This continues until someone acknowledges that they’re looking into the problem. Alerts can also be made specific to the problem, so employee A might be alerted for one incident and employee B for another. As you would expect, on-call schedules can also be created. This allows me to be paged at 10:00 am on a Tuesday for an incident, while our operations staff in Shanghai is paged at 3:00 am for that same incident instead of waking me up. For example, here is the current schedule for the first person to be called on our Shanghai operations team:

China On Call

With PagerDuty, I can confidently say that operations staff will be immediately alerted in the event of a critical problem. The hidden benefit here is that our operations staff is more productive: we don’t have to have someone constantly staring at the incoming stream of notifications. They can work on other things including continually increasing what we monitor and fine-tuning thresholds for alerting, as well as dealing with the less critical alerts.

Another benefit is that this allows us to quickly get staff on deck in the event of a major problem. Imagine you need to get a handful of senior engineers online at 2:00 am on a Thursday morning. Without PagerDuty, someone has to look up their numbers and start calling them to wake them up. We’ve set up PagerDuty so that we simply fire off an email and let the system deal with repeatedly calling each team member’s various numbers until they’ve responded. This allows the first responder to “fire and forget” to get help online and immediately get back to dealing with the problem at hand.
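Under the hood, integrations like this are simple: PagerDuty also accepts incident triggers through its generic events API, so a monitoring system can open an incident with a single HTTP POST. Here is a minimal sketch (the service key is a placeholder for the integration key PagerDuty assigns; error handling omitted as usual):

using System;
using System.Net;

class PagerDutyAlert
{
    static void Main()
    {
        // "YOUR_SERVICE_KEY" stands in for the key PagerDuty assigns to a service
        string payload = "{\"service_key\":\"YOUR_SERVICE_KEY\"," +
                         "\"event_type\":\"trigger\"," +
                         "\"description\":\"Third party monitoring reports services down\"}";

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";

            // POST the trigger event to PagerDuty's generic events endpoint
            string response = client.UploadString(
                "https://events.pagerduty.com/generic/2010-04-15/create_event.json",
                payload);
            Console.WriteLine(response);
        }
    }
}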

All said, I’m extremely happy with PagerDuty. This is just one more thing that demonstrates Xignite’s commitment to operational excellence. For Xignite, operations isn’t just a necessity; it is something that sets us apart.

Sass – CSS with superpowers indeed!

Sass - CSS with superpowers

As a web developer I have been writing CSS from scratch for a while now. Let me rephrase that, I have been forced to write CSS from scratch for a while now. Although I am fairly comfortable doing it, I have never been a big fan of it. CSS can be extremely fragile and in big projects with responsive design, quite a mess to deal with.

The advent of a host of new CSS preprocessors, along with some new web projects at Xignite, gave us (the web team at Xignite) the opportunity to delve into and evaluate some of them. Sass came out the winner, and here is why I personally am a big fan.

Variables for everyone!

Remember the time when, to change a font color, you had to look through an entire CSS file and replace each occurrence one by one, all while keeping in mind not to change the colors of the wrong styles? Well, you don’t need any of that anymore with Sass variables! Define once and use wherever you want; change once and it reflects everywhere!

Now mind you, Sass the preprocessor has two kinds of syntax: the original indented syntax (SASS) and SCSS. SCSS is the newer one and I personally prefer it; the reasons deserve a post of their own, but I will link to this, a nice resource for a side-by-side comparison.

$flat-asbestos: #7f8c8d;
$flat-silver: #bdc3c7;
h4{
   font-family: "Raleway";
   color: $flat-silver;
}
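When the SCSS above is compiled, the variable is substituted away and plain CSS comes out the other side:

h4 {
   font-family: "Raleway";
   color: #bdc3c7;
}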

The best part is that this can be extended to small code bits and blocks of CSS styles that are constantly repeated. These are called @mixins. Consider the following two styles:

.item{
     background-color: $flat-asbestos;
     text-align: center;
     height: 300px;
    
    -webkit-border-radius: 3px;
    -moz-border-radius: 3px;
    border-radius: 3px;
}
.footer-content{
   -webkit-border-radius: 3px;
   -moz-border-radius: 3px;
   border-radius: 3px;
}

Such blocks of styles are constantly repeated in CSS files, especially when we have to deal with -webkit-, -moz-, and other vendor-prefixed styles. We can get around this with Sass by using @mixins:

@mixin border-radius-3{
  -webkit-border-radius: 3px;
  -moz-border-radius: 3px;
  border-radius: 3px;
}
.item{
     background-color: $flat-asbestos;
     text-align: center;
     height: 300px;
     @include border-radius-3;
}
.footer-content{
     @include border-radius-3;
}

Nesting for all!

A big pet peeve of mine is the way CSS selectors are organized in regular CSS. Sass lets you naturally nest CSS styles, which makes them easy to read and apply:

.header{
  text-align: center;
  color: $white;
  h1{
     font-family: "Raleway";
     text-shadow: 4px 3px 0px $flat-wetAsphalt, 9px 8px 0px rgba(0, 0, 0, 0.15);
  }
}

The h1 style here will naturally apply to h1 elements inside the “header” class. This follows a very visual hierarchy of styles: much more intuitive to read and definitely easier to write.
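For the curious, the compiled output is the familiar descendant selector. Assuming, purely for illustration, that $white is #ffffff and $flat-wetAsphalt is #34495e:

.header {
  text-align: center;
  color: #ffffff;
}
.header h1 {
  font-family: "Raleway";
  text-shadow: 4px 3px 0px #34495e, 9px 8px 0px rgba(0, 0, 0, 0.15);
}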


Keeping it all DRY

Probably the most impressive feature of Sass is the ability to “inherit” or “extend” styles. This lets you adhere to the DRY (Don’t Repeat Yourself) principle. Less code duplication means fewer chances of error and more robust, reliable CSS management.

.message{
  border: 1px solid #ccc;
  padding: 10px;
  color: #333;
}
.success {
  @extend .message;
  border-color: green;
}
.error {
  @extend .message;
  border-color: red;
}
.warning {
  @extend .message;
  border-color: yellow;
}
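Compiled, the shared declarations are emitted only once, with the extending selectors grouped onto the .message rule:

.message, .success, .error, .warning {
  border: 1px solid #ccc;
  padding: 10px;
  color: #333;
}
.success {
  border-color: green;
}
.error {
  border-color: red;
}
.warning {
  border-color: yellow;
}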

Keeping it all in Visual Studio

Visual Studio does not support Sass syntax highlighting out of the box, nor does it understand Sass errors well enough for IntelliSense to be any good. For this there are multiple add-ons that can help you out. The one we use is Mindscape’s Web Workbench. It lets you create, update, and maintain Sass files in your project, and it automatically updates the related CSS file whenever you save your Sass file. This is extremely useful, because without it you would have to use Sass’s built-in toolbelt to watch the file and compile it to CSS every time it’s modified. That is not particularly hard, but it’s good not to have to think about it during development.
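For comparison, the watch-and-recompile step that Web Workbench automates is a one-liner with the Sass command-line tool that ships with the Ruby gem:

sass --watch main.scss:main.css

Run from the project folder, it recompiles main.css every time main.scss changes.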


So what about Sass again?

Although I was initially a bit skeptical about introducing Sass to the big web projects on the team, I must say that we have been pleasantly surprised by the results. Sass (in its SCSS syntax) is basically an extension of CSS, so in any SCSS file we can also write regular valid CSS. We can also use Sass’s built-in converters to turn old CSS files into very usable SCSS files.
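That conversion is itself a one-liner with the sass-convert tool that ships with Sass; it infers the input and output formats from the file extensions:

sass-convert old-styles.css old-styles.scss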

Sass does not change the mentality behind designing in any way. So if the CSS is not well thought out then Sass will not make it any better. It is simply a tool to make it more convenient and efficient. So long as we stick to the best CSS design and organization practices Sass can be of great help in making it even better!

Check out Xignite Labs! It was all built using Sass.


Xignite DNS Incident – June 12, 2014

On June 12, 2014, many Xignite customers were unable to correctly resolve xignite.com URLs due to a DNS issue, resulting in API request failures even though Xignite’s production infrastructure was fully operational. This is my first-hand account of what happened. All times below are Pacific.


Welcome to Xignite Labs!


Xignite Labs

We are extremely excited to announce the initial launch of Xignite Labs! With Labs, we are laying out a canvas on which our engineers and partners can test the next generation of FinTech ideas, powered by Xignite’s market data APIs.

Our first round of Xignite-powered widgets give you a taste of what’s to come:

Forex currency tiles are widgets that provide real-time exchange rates for 7 major currency pairs, along with historical rates for each pair, for an at-a-glance view of each pair’s current state.

Type.Ahead showcases how to easily provide type-ahead support for financial symbol searches, and can be used to search for either symbols or for API parameters.

HelloWorlds applications are ready-to-go sample applications, built in your preferred language (Java, Python, JavaScript, or Ruby), that will help you on your way to making your first Xignite service call.

There will be more! Over the coming months, this shall be the launchpad for many new widgets, whitepapers, and SDKs that exemplify how Xignite’s financial data services can help power your next Great Idea.

Stay tuned!
the Xignite Labs Team