This guest post was written by Daniel Peter, Senior Programmer Analyst at Safari Books Online.

Cross-posted from the Google Cloud Platform Blog

Safari Books Online is a subscription service for individuals and organizations to access a growing library of over 30,000 technology and business books and videos. Our customers browse and search the library from web browsers and mobile devices, generating powerful usage data which we can use to improve our service and increase profitability. We wanted to quickly and easily build dashboards, improve the effectiveness of our sales teams and enable ad-hoc queries to answer specific business questions. With billions of records, we found it challenging to get the answers to our questions fast enough with our existing MySQL databases.

Looking for alternative solutions to build our dashboards and enable interactive ad-hoc querying, we played with several technologies, including Hadoop. In the end, we decided to use Google BigQuery.

Here’s how we pipe data into BigQuery:

Our data starts in our CDN and server logs, gets packaged up into compressed files, and runs through our ETL server before finishing in BigQuery.

Here’s one of the dashboards we built using the data:

You can see that with the help of BigQuery, we can easily categorize our books. This dashboard shows the most popular books among desktop and mobile users, and with BigQuery we can run quick queries to explore other usage patterns as well.

BigQuery has been very valuable for our company, and we’re just scratching the surface of what is possible.

Check out the article for more details on how we manage our import jobs, transform our data, build our dashboards, detect abuse and improve our sales team's effectiveness.

Posted by Scott Knaster, Editor

By Tim Bray, Google Identity Team

As part of our continuous effort to increase Internet security for Google and for our users, we are in the process of migrating from 1024-bit to 2048-bit certificates. We will also be changing our certificate chain.

This roll-out has already started and will be completed in the next few months.

We asked some of our experts whether they could think of scenarios where client software might have trouble with this change, and they came up with a couple. The first is people using very old, home-compiled versions of OpenSSL with an out-of-date CA database. The second is embedded-client software that (against the best advice of all the experts) hard-codes certificate logic, perhaps to save space.

Having said that, most client software should work just fine. Feel free to visit our Frequently Asked Questions page for more information and, to be sure, test your clients before the roll-out completes.

Tim says: By day, I help in the struggle against passwords on the Internet.
The rest of my life is fully documented on my blog.

Posted by Scott Knaster, Editor

By Piotr Stanczyk, Tech Lead, Google Calendar APIs

If you've developed an application integrated with Google Calendar, you know that you need to poll periodically for event changes to keep your application in sync. Today we're launching push notifications for the Calendar API, which make periodic polling unnecessary. Push notifications significantly reduce the amount of time an app is out of sync with the server, and for mobile devices this can mean big savings in data usage and power consumption.

The only thing an app needs to do to get the new push functionality is subscribe to a calendar of interest. When that calendar changes, we notify your app, and the app makes an API call to fetch the update. The Google API client libraries make push notifications easy to use.

As an example, let's assume you want to watch a calendar, your app is hosted on a server you control, and push notifications should be delivered to an HTTPS web-hook on that server. Subscribing looks like this (the channel ID, web-hook URL, and calendar ID are placeholders; substitute your own):
Map<String, String> params = new HashMap<String, String>();
Channel request = new Channel()
    .setId("my-unique-channel-id")                    // placeholder channel ID
    .setType("web_hook")
    .setAddress("https://example.com/notifications")  // placeholder web-hook URL
    .setParams(params);
client.events().watch("my-calendar-id", request).execute();
From now on, every time the calendar changes, the Google Calendar server will trigger a web-hook callback at that address. All the app needs to do is request an incremental sync as it did before:
changes = client.events().list("my-calendar-id").execute();
If you are interested in using this new feature, please refer to the Calendar API v3 documentation for push notifications.
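On the receiving side, the web-hook endpoint just needs to acknowledge each callback and kick off a sync. Here is a minimal sketch in Python (not an official sample): the header names follow the push documentation, while the host, port, and sync routine are placeholders.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def should_sync(headers):
    """The first message on a new channel carries X-Goog-Resource-State "sync"
    and only confirms the subscription; later messages signal real changes."""
    return headers.get("X-Goog-Resource-State", "") != "sync"

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if should_sync(self.headers):
            pass  # placeholder: kick off your incremental sync here
        self.send_response(200)  # acknowledge promptly so the hub stops retrying
        self.end_headers()

# To run (behind TLS, since the callback address must be HTTPS):
# HTTPServer(("", 8443), NotificationHandler).serve_forever()
```

Acknowledging quickly and doing the actual sync out of band keeps the callback endpoint responsive even when a sync is slow.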

Piotr Stanczyk is a Tech Lead for Google Calendar APIs. His current focus is the next generation of the Calendar API, which aims to make developers' lives easier.

Posted by Scott Knaster, Editor

By John Affaki, Engineering Manager

Today we announced Chromecast, a small $35 device that plugs into any HDTV in your house to bring online video and entertainment to your TV. Chromecast users can cast content from dedicated apps like Netflix or from the web using Chrome, but we'd like to see more multi-screen experiences brought directly to the TV. So today, we've also released the developer preview of the Google Cast SDK, a technology that brings mobile and web content to the TV screen.

You can use the Google Cast SDK to enable your mobile and web apps to cast content to the TV. Chromecast then streams that content directly from the web. Integrating Google Cast technology into your app makes it easier for your users to enjoy their favorite videos, playlists, movies and more on the big screen, without requiring them to download a new application or learn a new interaction model.

The developer preview includes the SDKs for Android, iOS and Chrome, which enable you to extend your application to the TV. We're excited to see what you build with Google Cast, and we're looking forward to your feedback as we continue to refine Google Cast. Get the Google Cast preview SDKs, documentation and samples at Google Developers.

John Affaki manages the client software and services teams for Google Cast. He never grew out of childhood and spends his free time playing video games and reading comic books, but is glad to have some real kids now.

Posted by Scott Knaster, Editor

By +Scott Knaster, Google Developers Blog Editor

Sometimes you have a baby and a startup at the same time. For times like those, Google’s Campus Tel Aviv recently held Campus for Moms, a startup course designed to be friendly to babies and the moms taking care of them. The nine-week course included technical topics like the evolution of cloud computing, along with legal information, financial advice, and more. The classroom was filled with bean bags and mats so that moms and babies could hang out together during sessions.

Graduates finished up by presenting their ideas to potential investors and class leaders. And during the course of the nine-week class session, four participants announced new launches: their babies were born.

Human babies build memories as they learn and grow, but what about other primates? A recent study provides evidence that chimps and orangutans can remember things for longer and more precisely than previously thought. Researchers found that both species could instantly recall an event (finding a particular tool) that took place three years earlier. The animals could also distinguish events that took place two weeks prior. Could ape startups be next?

Finally – and we really do mean finally – the pitch drop experiment at Trinity College Dublin has come to a successful conclusion. After nearly 70 years of trying, the project now has video proof that tar pitch sometimes behaves like a liquid and, given enough time and the right conditions, will form drops that fall from the main body. The experiment started in 1944, and now it’s done. Hooray!

This weekend, why not start a multi-decade experiment of your own? Maybe we can even feature it on some future edition of Fridaygram, if it still exists when your own pitch-drop moment happens.

By Scott Knaster, Google Developers Blog Editor

Maker Camp is an online summer camp created for teens who want to make gadgets, software and other cool stuff. Anyone can join Maker Camp, and it’s free. The folks at Maker Camp post a new project each weekday morning, and each afternoon there’s a Hangout with experts who show how they make things. Plus, there are Hangout “field trips” to places like NASA Ames Research Center.

The makers of Maker Camp say they want to embrace not only the spirit of DIY (do it yourself), but also DIT (do it together), and their online community and Hangouts are designed to do just that. Maker Camp started earlier this week and runs for six weeks. You can participate any day by visiting the Maker Camp page on Google+ and joining the Maker Camp Google+ Community.

Speaking of great projects, researchers have long been trying to create a mirror that reflects all the light that reaches it without absorbing any – a so-called perfect mirror. Recently, scientists at MIT were studying photonic crystals when they discovered a way to get the crystal to reflect all the light from a specific frequency of red light shined on it. This discovery could lead to more efficient lasers and yet-unknown advances.

Finally, we turn (as we often do on Fridaygram) to space, where amazing things happen, but so do mundane things, like astronauts having to wash their hair. This can be especially problematic if you have long hair and you’re aboard the International Space Station, like Astronaut Karen Nyberg. In this video, Astronaut Nyberg shows how she washes her long hair in zero gravity, where you have to keep track of the water lest it float away from you.

Here’s an idea: future astronauts could look at themselves in perfect mirrors when they wash their hair. That’s just the kind of out-of-the-box thinking you’ll get here on Fridaygram, where we try to provide a respite from your week of coding by offering some fun nerdy stuff.

By Srinivasan Kannan, Google Analytics API Team

Cross-posted from the Google Analytics Blog

Over the past year we’ve added many new features to Google Analytics. Today we are releasing all of this data in the Core Reporting API.

Custom Dimensions and Metrics

We're most excited about the ability to query for custom dimensions and metrics using the API.

Developers can use custom dimensions to send unique IDs into Google Analytics, and then use the core reporting API to retrieve these IDs along with other Google Analytics data.

For example, your content management system can pass a content ID as a custom dimension using the Google Analytics tracking code. Developers can then use the API to get a list of the most popular content by ID and display the list of most popular content on their website.
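As a rough sketch of such a request (the view ID, custom dimension index, and date range below are placeholders), a Core Reporting API query for the most-viewed content IDs could be built like this:

```python
from urllib.parse import urlencode

# Placeholder values: substitute your own view (profile) ID and the index
# of the custom dimension that carries your content ID.
params = {
    "ids": "ga:12345678",            # view (profile) ID
    "start-date": "2013-06-01",
    "end-date": "2013-06-30",
    "metrics": "ga:pageviews",
    "dimensions": "ga:dimension1",   # custom dimension holding the content ID
    "sort": "-ga:pageviews",         # most popular content first
    "max-results": "10",
}
query = "https://www.googleapis.com/analytics/v3/data/ga?" + urlencode(params)
```

The response rows pair each content ID with its pageview count, ready to render as a "most popular" list.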

Mobile Dimensions and Metrics

We've added more mobile dimensions and metrics, including those found in the Mobile App Analytics reports:

  • ga:appId
  • ga:appVersion
  • ga:appName
  • ga:appInstallerId
  • ga:landingScreenName
  • ga:screenDepth
  • ga:screenName
  • ga:exitScreenName
  • ga:timeOnScreen
  • ga:avgScreenviewDuration
  • ga:deviceCategory
  • ga:isTablet
  • ga:mobileDeviceMarketingName
  • ga:exceptionDescription
  • ga:exceptionsPerScreenview
  • ga:fatalExceptionsPerScreenview

This new data can answer questions such as: Which app versions are most widely used? Which screens produce the most exceptions? How long do users spend on each screen?

Local Currency Metrics

If you send transactions to Google Analytics in multiple currencies, you can now access the local currency of those transactions with this new data:

  • ga:currencyCode
  • ga:localItemRevenue
  • ga:localTransactionRevenue
  • ga:localTransactionShipping
  • ga:localTransactionTax
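For instance, a query for local revenue broken down by currency might look like the following sketch (the view ID and dates are placeholders):

```python
from urllib.parse import urlencode

# Placeholder view (profile) ID and date range.
params = {
    "ids": "ga:12345678",
    "start-date": "2013-06-01",
    "end-date": "2013-06-30",
    "metrics": "ga:localTransactionRevenue",
    "dimensions": "ga:currencyCode",  # one row per local currency
}
query = "https://www.googleapis.com/analytics/v3/data/ga?" + urlencode(params)
```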

Time Dimensions

We added new time-based dimensions to simplify working with reporting data:

  • ga:dayOfWeekName
  • ga:dateHour
  • ga:isoWeek
  • ga:yearMonth
  • ga:yearWeek

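For example, a sketch of a query for visits by hour of day (the view ID and dates are placeholders):

```python
from urllib.parse import urlencode

# Placeholder view (profile) ID; returns one row per calendar hour,
# e.g. "2013060114" for 2 pm on June 1, 2013.
params = {
    "ids": "ga:12345678",
    "start-date": "2013-06-01",
    "end-date": "2013-06-30",
    "metrics": "ga:visits",
    "dimensions": "ga:dateHour",
}
query = "https://www.googleapis.com/analytics/v3/data/ga?" + urlencode(params)
```

Swapping in ga:isoWeek or ga:yearMonth aggregates the same metric by week or month without any client-side date math.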

Traffic Source Dimensions

Finally, we've added two new traffic source dimensions, including one to return the full URL of the referral.

  • ga:fullReferrer
  • ga:sourceMedium

Sample query: the top 10 referrers based on visits (using full referrer).
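That query could be sketched like this (the view ID and date range are placeholders):

```python
from urllib.parse import urlencode

# Top 10 referrers by visits, using the new ga:fullReferrer dimension.
params = {
    "ids": "ga:12345678",            # placeholder view (profile) ID
    "start-date": "2013-06-01",
    "end-date": "2013-06-30",
    "metrics": "ga:visits",
    "dimensions": "ga:fullReferrer",
    "sort": "-ga:visits",            # highest-traffic referrers first
    "max-results": "10",
}
query = "https://www.googleapis.com/analytics/v3/data/ga?" + urlencode(params)
```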

For a complete list of the new data, take a look at the Core Reporting API changelog.
For all the data definitions, check the Core Reporting API Dimensions and Metrics explorer.
As always, you can check out this new data directly within our Query Explorer tool.

Written by Srinivasan Kannan, Google Analytics API Team

Posted by Scott Knaster, Editor

By Michael Kleber, Ads Latency Team

As part of our ongoing quest to make the Web faster, we're happy to share a new tag that publishers can use to request AdSense ads. This new tag is built around the modern <script async> mechanism, ensuring that even if a slow network connection makes our script slow to load, the surrounding web page can fully render. A few other benefits come along for the ride, like the ability for ad slot configuration and placement to live entirely in the DOM.

A decade ago, when AdSense was born, asking publishers to copy-and-paste a bit of HTML including <script src=".../show_ads.js"> was the natural way to get our ads to appear in that spot on a web page. But as web speed evangelist Steve Souders has explained, that kind of blocking script tag is a "Frontend Single Point of Failure": if a server or network problem makes that script unavailable, it could prevent the whole rest of the page from appearing. That's why modern best practices emphasize loading most resources asynchronously. These guidelines are even more important for sites loaded on mobile devices, where dropped HTTP connections are far too common.

Anatomy of the new tag:
  1. A script, which only needs to appear once on your page, even if you have multiple ads. It is loaded asynchronously, so it is safe and most efficient to put it near the top.

    <script async src=""></script>
  2. A particular DOM element, inside of which the ad will appear. We use attributes of this element to configure the properties of the ad slot.
  3. A call to adsbygoogle.push(), which instructs us to fill in the first unfilled slot.
    <ins class="adsbygoogle"
         style="display:inline-block;width:728px;height:90px"
         data-ad-client="ca-pub-XXXXXXXXXXXXXXXX"
         data-ad-slot="YYYYYYYYYY"></ins>
    <script>
      (adsbygoogle = window.adsbygoogle || []).push({});
    </script>
The ad block size is based on the width and height of the <ins>, which can be set inline, as shown here, or via CSS. Other configuration happens through its data-* properties, which take the place of the old (and manifestly not async-friendly!) google_* window-level variables. For example, if your Google Analytics integration required setting google_analytics_uacct="UA-zzzzzz-zz" before, you should now add the property data-analytics-uacct="UA-zzzzzz-zz" to the <ins> instead.

The classic show_ads.js tag is a battle-tested warrior with years of experience, running on millions of sites around the web. The upstart adsbygoogle.js is in beta and may not be as robust yet. Give it a try and please let us know if you run into any problems. But if you were jumping through hoops before to keep ads from blocking your content, try this out straight away. We think you'll approve.

Michael Kleber is the Tech Lead of the Ads Latency Team at the Google office in Cambridge, MA. Before coming to Google to make the Internet faster, he spent time as a math professor working on representation theory and combinatorics, as a computational biologist working on genome assembly, and on machine learning for speech recognition.

Posted by Scott Knaster, Editor

By Peter Dickman, Engineering Manager

Google has supported the PubSubHubbub (PuSH) protocol since its introduction in 2009. Earlier this year we completely rewrote our PuSH hub implementation, both to make it more resilient and to considerably enhance its capacity and throughput. Our improved PuSH hub means we can expose feeds more efficiently, coherently and consistently, from a robust secure access point. Using the PuSH protocol, servers can subscribe to an almost arbitrarily large number of feeds and receive updates as they occur.

In contrast, the Feed API allows you to download any specific public Atom or RSS feed using only JavaScript, enabling easy mashups of feeds with your own content and other APIs. We are planning some improvements to the Feed API, as part of our ongoing infrastructure work.

We encourage you to consider PuSH as a means of accessing feeds in bulk. To support that, we’re clarifying our practices around bots interacting with Google’s PuSH system: we encourage providers of feed systems and related tools to connect their automated systems for feed acquisition to our PuSH hub (or other hubs in the PuSH ecosystem). The PuSH hub is designed to be accessed by bots and it’s tuned for large-scale reading from the PuSH endpoints. We have safeguards against abuse, but legitimate users of the access points should see generous limits, with few restrictions, speed bumps or barriers. Similarly, we encourage publishers to submit their feeds to a public PuSH hub, if they don’t want to implement their own.
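Subscribing through PuSH is just a form-encoded HTTP POST to the hub. Here is a hedged sketch (the topic feed, callback URL, and hub address are placeholders, and a real subscriber must also answer the hub's verification request at its callback URL):

```python
from urllib.parse import urlencode

# A v0.4-style subscription request body. The subscriber POSTs this to the
# hub; the hub then verifies the callback before delivering updates.
body = urlencode({
    "hub.mode": "subscribe",
    "hub.topic": "http://example.blogspot.com/feeds/posts/default",
    "hub.callback": "https://example.com/push-endpoint",
    "hub.lease_seconds": "86400",  # optional: requested subscription lifetime
})
# e.g. urllib.request.urlopen(hub_url, data=body.encode())
```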

Google directly hosts many feed producers (e.g. Blogger is one of the largest feed sources on the web) and is a feed consumer too (e.g. many webmasters use feeds to tell our Search system about changes on their sites). Our PuSH hub offers easy access to hundreds of millions of Google-hosted feeds, as well as hundreds of millions of other feeds available via the PuSH ecosystem and through active polling.

The announcement of v0.4 of the PuSH specification advances our goal of strengthening infrastructure support for feed handling. We’ve worked with Superfeedr and others on the new specification and look forward to it being widely adopted.

Peter Dickman spends his days herding cats for the Search Infrastructure group in Zurich. He divides his spare time between helping government bodies understand cloud computing and systematically evaluating the products of Switzerland’s chocolatiers.

Posted by Scott Knaster, Editor