Posted:
By Emma Turpin, DevArt Lead at Google Creative Lab and Paul Kinlan, Developer Advocate

Mapping a dream as it navigates through your brain using the Google+ API. Exploring metamorphosis through storytelling in the form of a poetic adventure with Chrome Apps and the Compute Engine API. Travelling through a playful giant map that explores fantasy and reality on a huge scale using the Maps API. Creating music through the touch of your finger on a simple piece of wood using Android.

These are just a sample of the hundreds of projects we received after inviting the developer community to express themselves creatively as part of DevArt. We were looking for a unique idea which mixes art and code and pushes the boundaries, to be featured in the Barbican's Digital Revolution exhibition, opening this summer in London and from there touring the rest of the world.

And the winner is … the duo Cyril Diagne and Béatrice Lartigue from France. Cyril and Béatrice’s project, Les métamorphoses de Mr. Kalia, is an interactive poetic adventure around the theme of metamorphosis in the human body. It invites gallery visitors to personify Mr. Kalia as he goes through many surrealistic changes [video] [project page on DevArt site]. The piece conveys feelings related to change, evolution and adaptation. Mr. Kalia is brought to life through the use of skeleton-tracking technology, and uses Chrome Apps and Google Compute Engine.


Cyril and Béatrice’s installation will sit alongside work by three of the world’s finest interactive artists, who are also creating installations for DevArt: Karsten Schmidt, Zach Lieberman, and the duo Varvara Guljajeva and Mar Canet. The Digital Revolution exhibition opens in London on 3 July, with tickets available online here.

We were overwhelmed by all of the amazing ideas we saw, a testament to the creativity that’s possible with code. Watch this space - DevArt at the Digital Revolution exhibition at the Barbican opens in July!

Paul Kinlan is a Developer Advocate in the UK on the Chrome team, specialising in mobile. He lives in Liverpool and loves helping the city's tech community grow from places like the DoES Liverpool hackspace.


Posted by Louis Gray, Googler

By Nadya Direkova, Staff Designer and Design Evangelist

When Google launched, it was a crisp white page with a simple search box. You might not have thought there was much in the way of design, but its appearance underscored two of our most important principles: simplicity and usefulness. Those principles haven't changed much in fifteen years, but our understanding of what makes great design has—throughout the industry. Today, there’s design in everything we touch. And as a developer, even if you don't happen to be a formal designer, you've undoubtedly faced design challenges as you've built your own products. Design has always been a rich conversation, and it's one that we’d like to have with you as you work on your projects and as we work on ours.

At Google I/O this year, we will have sessions and workshops focused on design, geared for designers and developers who are interested in design. We're looking forward to exchanging ideas with you both at the conference and online afterwards. Remember, registration is open until Friday and details on Google I/O Extended events are coming soon.

To start off the conversation, today we're kicking off a series called "Google Design Minutes" with three videos where we share some key learnings with you, from Glass, Maps and Search. We hope they'll help you navigate some of your own design challenges.


Search — The beauty of speed: Jon Wiley discusses the importance of designing for simplicity and speed.


Maps — Putting the user front and center: Sian Townsend looks at the importance of understanding how a user approaches your product, while Jonah Jones talks about adapting that approach to make the map the user interface.


Glass — Make it simple: Isabelle Olsson talks about the focus her team put into making Glass simple, and how this choice guided all the decisions they made.

We'll be continuing this conversation around design over the coming months—including more of these videos, as well as design-focused content on our developer site. And we'd love to hear from you! We've set up a Google Moderator page to hear your questions for our designers and researchers; in the coming weeks, we'll share our thoughts on the top questions. We look forward to seeing you at Google I/O, and hearing your own thoughts on what makes great design today!

Nadya Direkova is a Staff Designer at Google [X]. She runs design sprints across Google allowing interdisciplinary teams to design with skill and speed.


Cross-posted from the Geo Developers Blog

By Mike Jeffrey, Google Maps API Team

Are you an iOS developer interested in adding a map to your application? The instructional experts at Code School set out to create a course introducing the Google Maps SDK for iOS to developers like you — and they delivered!

Exploring Google Maps for iOS is a free course covering everything from adding a simple map, to using geocoding and directions, to incorporating Street View in iOS. You'll end up with a working sample application and gain the knowledge you need to build your own amazing Google Maps-based apps. Learn from videos, sample code, and Xcode-based coding challenges.

Check out the introduction video below, and then head over to Code School to get started with their interactive course!

You can also read our official developer documentation and reference docs at https://developers.google.com/maps/documentation/ios/.


By Billy Rutledge, Director of Developer Relations

Whether you're a developer who's interested in the latest developments in the Googleverse or someone who wants to build the next billion-dollar app, Google I/O is the ticket for you. It’s your opportunity to speak directly with us about what’s going on and where apps are headed in 2014. At past I/Os, we skydived onto the Moscone Center, broadcasting live using Google Glass and Google+; launched hardware like the original Nexus 7 and the first commercial Chromebooks; and showed you how you can use Google services to take your apps to the next level.

While we aren't ready to share what’s up our sleeves just yet, we can share that we’re focusing on three key themes this year: design, develop and distribute, helping you build and prove your app from start to finish. With those themes in mind and the registration window opening today (link), here’s what we’ve got planned for Google I/O 2014:

  • More time to talk code, wireframes, and business plans with real humans: If you're coming in person, the schedule will give you more time to interact in the Sandbox, where partners will be on hand to demo apps built on the best of Google and open source, and where you can interact with Googlers 1:1 and in small groups. Come armed with your app and get ready for direct feedback on your app design, code, and distribution plan.
  • Go deeper with content: A streamlined session schedule will be published in May featuring talks that will inspire ideas for your next app, while giving you the tools to build it. We'll also be providing self-paced Code Labs that you can dive into while at I/O.
  • A taste of San Francisco, with After Hours: All work and no play makes for a dull conference! After Hours will showcase some of what our vibrant city has to offer, including craft brews in our beer garden, our city's illustrious food trucks, and local indie bands, so you can tear it up on the lawn (yes, that's right: outside this year!).

We’re really looking forward to this year’s event and hope you are too. Don't forget that the registration window will remain open from 4pm PDT today (April 15) until 2pm PDT on April 18. Applicants will be selected at random after the window closes, and we’ll let you know your status on or around April 21.

We look forward to seeing you in June, whether you’re joining us at Moscone, at an I/O Extended event, or online at I/O Live.

Billy Rutledge, Director of Developer Relations


By Nick Mihailovski, Product Manager, Google Analytics API Team

Cross-posted from the Google Analytics Blog

Segmentation is one of the most powerful analysis techniques in Google Analytics. It's core to understanding your users, and allows you to make better marketing decisions. Using segmentation, you can uncover new insights such as:
  • How loyalty impacts content consumption
  • How search terms vary by region
  • How conversion rates differ across demographics
Last year, we announced a new version of segments that included a number of new features.

Today, we've added this powerful functionality to the Google Analytics Core Reporting API. Here's an overview of the new capabilities we added:

User Segmentation
Previously, advanced segments were solely based on sessions. With the new functionality in the API, you can now define user-based segments to answer questions like "How many users had more than $1,000 in revenue across all transactions in the date range?"

Example: segment=users::condition::ga:transactionRevenue>1000

Try it in the Query Explorer.
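
As a rough illustration, here is how that segment parameter might be attached to a Core Reporting API request URL. This is a sketch only: the profile ID, date range and metric values below are placeholders, not part of the example above.

```python
from urllib.parse import urlencode

# Hypothetical report parameters; the ids, dates and metrics are placeholders.
params = {
    "ids": "ga:12345",
    "start-date": "2014-03-01",
    "end-date": "2014-03-31",
    "metrics": "ga:users",
    # The new user-based segment from the example above.
    "segment": "users::condition::ga:transactionRevenue>1000",
}

# urlencode percent-escapes the "::" and ">" characters in the segment.
url = ("https://www.googleapis.com/analytics/v3/data/ga?"
       + urlencode(params))
print(url)
```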

Sequence-based Segments
Sequence-based segments provide an easy way to segment users based on a series of interactions. With the API, you can now define segments to answer questions like "How many users started at page 1, then later, in a different session, made a transaction?"

Example: segment=users::sequence::ga:pagePath==/shop/search;->>perHit::ga:transactionRevenue>10

Try it in the Query Explorer.

New Operators
To simplify building segments, we added new operators for filtering on dimensions whose values are numbers and for limiting metric values to ranges. Additionally, we updated segment definitions in the Management API segments collection.

Partner Solutions
Padicode, one of our Google Analytics Technology Partners, used the new sequence-based segments API feature in PadiTrack, their funnel analysis product.

PadiTrack allows Google Analytics customers to create ad-hoc funnels to identify user flow bottlenecks. By fixing these bottlenecks, customers can improve performance, and increase overall conversion rate.

The tool is easy to use and allows customers to define an ad-hoc sequence of steps. The tool uses the Google Analytics API to report how many users completed, or abandoned, each step.


Funnel Analysis Report in PadiTrack

According to Claudiu Murariu, founder of Padicode, "For us, the new API has opened the gates for advanced reporting outside the Google Analytics interface. The ability to be able to do a quick query and find out how many people added a product to the shopping cart and at a later time purchased the products, allows managers, analysts and marketers to easily understand completion and abandonment rates. Now, analysis is about people and not abstract terms such as visits."

The PadiTrack conversion funnel analysis tool is free to use. Learn more about PadiTrack on their website.

Resources
We're looking forward to seeing what people build using this powerful new functionality.

Nick is the Lead Product Manager for Core Google Analytics, including the Google Analytics APIs. Nick loves and eats data for lunch, and in his spare time he likes to travel around the world.


Cross-posted from the Geo Developers Blog

By Mark McDonald, Google GeoDevelopers Team

We recently announced the launch of the data layer in the Google Maps JavaScript API, including support for GeoJSON and declarative styling. Today we’d like to share a technical overview explaining how you can create great-looking data visualizations using Google Maps.

Here’s our end goal. Click through to interact with the live version.
Data provided by the Census Bureau Data API; this product is not endorsed or certified by the Census Bureau.

Cross-posted from the Chromium Blog

By Anders Johnsen, Google Chrome Team

Today's release of the Dart SDK version 1.3 includes a 2x performance improvement for asynchronous Dart code combined with server-side I/O operations. This puts Dart in the same league as popular server-side runtimes and allows you to build high-performance server-side Dart VM apps.

We measured requests-per-second improvements using three simple HTTP benchmarks: Hello, File, and JSON. Hello, which improved by 130%, measures how many basic connections an HTTP server can handle by having the server respond with a fixed string. The File benchmark, which simulates the server accessing and serving static content, improved by nearly 30%. Finally, as a proxy for the performance of REST apps, the JSON benchmark nearly doubled in throughput. In addition to great performance, another benefit of using Dart on the server is that it allows you to use the same language and libraries on both the client and server, reducing mental context switches and improving code reuse.

The data for the chart above was collected on an Ubuntu 12.04.4 LTS machine with 8GB RAM and an Intel(R) Core(TM) i5-2400 CPU, running a single-isolate server on Dart VM versions 1.1.3, 1.2.0 and 1.3.0-dev.7.5.
The source for the benchmarks is available.

We are excited about these initial results, and we anticipate continued improvements for server-side Dart VM apps. If you're interested in learning how to build a web server with Dart, check out the new Write HTTP Clients and Servers tutorial and explore the programmer's guide to command-line apps with Dart. We hope to see what you build in our Dartisans G+ community.

Anders Johnsen is a software engineer on the Chrome team, working in the Aarhus, Denmark office. He helps Dart run in the cloud.


By Felipe Hoffa, Cloud Platform team

Cross-posted from the Google Cloud Platform Blog
Editor's note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.

Last Tuesday we announced an exciting set of changes to Google BigQuery, making your experience easier, faster and more powerful. In addition to new features and improvements like table wildcard functions, views, and parallel exports, BigQuery now features increased streaming capacity, lower pricing, and more.


1000x increase in streaming capacity

Last September we announced the ability to stream data into BigQuery for instant analysis, with an ingestion limit of 100 rows per second. While developers have enjoyed and exploited this capability, they've asked for more capacity. You can now stream up to 100,000 rows per second per table into BigQuery: 1,000 times more than before.
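
Because the quota is per table, a client typically batches its rows rather than sending one request per row. The sketch below is a generic batching helper, not part of the BigQuery API; the batch size is illustrative, not an official limit.

```python
def batch_rows(rows, batch_size=500):
    """Split rows into batches so each streaming-insert request
    stays well under the per-table quota. The batch size here is
    illustrative, not an official limit."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

# Hypothetical rows to stream.
rows = [{"ts": i, "value": i * 2} for i in range(1200)]
batches = list(batch_rows(rows))
print(len(batches))  # 3 batches: 500 + 500 + 200
```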

For a great demonstration of the power of streaming data into BigQuery, check out the live demo from the keynote at Cloud Platform Live.

Table wildcard functions

Users often partition their big tables into smaller units for data-lifecycle and optimization purposes. For example, instead of having yearly tables, they could be split into monthly or even daily sets. BigQuery now offers table wildcard functions to help you easily query tables that match common parameters.

The downside of partitioning tables is writing queries that need to access multiple tables. This would be easier if there were a way to tell BigQuery "process all the tables between March 3rd and March 25th" or "read every table whose name starts with 'a'". With this release, you can.

TABLE_DATE_RANGE() queries all tables that overlap with a time range (based on the table names), while TABLE_QUERY() accepts regular expressions to select the tables to analyze.
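
Conceptually, TABLE_DATE_RANGE() expands a date range into the set of daily table names the query unions. The helper below mimics that expansion, assuming the common convention of a shared prefix plus a YYYYMMDD suffix; it is an illustration of the idea, not the BigQuery implementation.

```python
from datetime import date, timedelta

def expand_date_range(prefix, start, end):
    """Roughly what TABLE_DATE_RANGE does: expand a date range into
    the daily table names (prefix + YYYYMMDD) that the query covers."""
    d = start
    while d <= end:
        yield prefix + d.strftime("%Y%m%d")
        d += timedelta(days=1)

# Hypothetical daily tables named events20140303, events20140304, ...
tables = list(expand_date_range("events", date(2014, 3, 3), date(2014, 3, 5)))
print(tables)  # ['events20140303', 'events20140304', 'events20140305']
```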

For more information, see the documentation and syntax for table wildcard functions.

Improved SQL support and table views

BigQuery adopted SQL as its query language because it's one of the best-known, simplest and most powerful ways to analyze data. Nevertheless, BigQuery used to impose some restrictions relative to traditional SQL-92, like having to write multiple sub-queries instead of simpler multi-joins. Not anymore: BigQuery now supports multi-join and CROSS JOIN, and improves its SQL capabilities with more flexible alias support, fewer ORDER BY restrictions, more window functions, smarter PARTITION BY, and more.

A notable new feature is the ability to save queries as views, and use them as building blocks for more complex queries. To define a view, you can use the browser tool to save a query, the API, or the newest version of the BigQuery command-line tool (by downloading the Google Cloud SDK).

User-defined metadata

Now you can annotate each dataset, table, and field with descriptions that are displayed within BigQuery. This way people you share your datasets with will have an easier time identifying them.

JSON parsing functions

BigQuery is optimized for structured data: before loading data into BigQuery, you should first define a table with the right columns. This is not always easy, as JSON schemas might be flexible and in constant flux. BigQuery now lets you store JSON encoded objects into string fields, and you can use the JSON_EXTRACT and JSON_EXTRACT_SCALAR functions to easily parse them later using JSONPath-like expressions.

For example:
SELECT json_extract_scalar(
   "{'book': { 
       'category':'fiction', 
       'title':'Harry Potter'}}", 
   "$.book.category");


Fast parallel exports

BigQuery is a great place to store all your data and have it ready for instant analysis using SQL queries. But sometimes SQL is not enough, and you might want to analyze your data with external tools. That's why we developed the new fast parallel exports: With this feature, you can define how many workers will be consuming the data, and BigQuery exports the data to multiple files optimized for the available number of workers.
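
On the consuming side, the worker pool then divides the exported files among its members. A minimal sketch, assuming a simple round-robin assignment (the file names and sharding scheme are illustrative, not part of the export API):

```python
def assign_files(files, num_workers):
    """Illustrative only: deal exported files out to workers
    round-robin so each worker consumes a similar share."""
    shards = [[] for _ in range(num_workers)]
    for i, f in enumerate(files):
        shards[i % num_workers].append(f)
    return shards

# Hypothetical export file names.
files = ["export-%03d.csv" % i for i in range(7)]
shards = assign_files(files, 3)
print([len(s) for s in shards])  # [3, 2, 2]
```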

Check the exporting data documentation, or stay tuned for the upcoming Hadoop connector to BigQuery documentation.

Massive price reductions

At Cloud Platform Live, we announced a massive price reduction: storage costs are going down 68%, from 8 cents per gigabyte per month to only 2.6 cents, while querying costs are going down 85%, from 3.5 cents per gigabyte to only 0.5 cents. Previously announced streaming costs are now reduced by 90%. And finally, we announced the ability to purchase reserved processing capacity, for even lower prices and the ability to precisely predict costs. You always have the option to burst using on-demand capacity.

I want to take this space to celebrate the latest open source community contributions to the BigQuery ecosystem. R has its own connector to BigQuery (and a tutorial), as does Python pandas (check out the video we made with Pearson). Ruby developers can now use BigQuery with an ActiveRecord connector, and send all their logs with fluentd. Thanks all, and keep surprising us!

Felipe Hoffa is part of the Cloud Platform Team. He'd love to see the world's data accessible for everyone in BigQuery.


By Igor Clark, Google Creative Lab

With the release of the Google Cast SDK, making interactive experiences for the TV is now as easy as making interactive stuff for the web.

Google Creative Lab and Hook Studios took the SDK for a spin to make Photowall for Chromecast: a new Chrome Experiment that lets people collaborate with images on the TV.

Anyone with a Chromecast can set up a Photowall on their TV and have friends start adding photos to it from their phones and tablets in real time.

So how does it work?

The wall-hosting apps communicate with the Chromecast using the Google Cast SDK’s sender and receiver APIs. A simple call to the requestSession method using the Chrome API or launchApplication on the iOS/Android APIs is all it takes to get started. From there, communication with the Chromecast is helped along using the Receiver API’s getCastMessageBus method and a sendMessage call from the Chrome, iOS or Android APIs.

Using the Google Cast SDK makes it easy to launch a session on a Chromecast device. While a host is creating their new Photowall, they simply select which Chromecast they would like to use for displaying the photos. After a few simple steps, a unique five-digit code is generated that allows guests to connect to the wall from their mobile devices.
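
The post doesn't say how the wall codes are generated; one plausible sketch is to draw random five-digit codes until one doesn't collide with a wall already in use. The function below is hypothetical, not the Photowall implementation.

```python
import random

def new_wall_code(existing):
    """Hypothetical sketch: pick a random five-digit code, retrying
    until it doesn't collide with a code already in use."""
    while True:
        code = "%05d" % random.randrange(100000)
        if code not in existing:
            return code

existing = {"12345"}         # codes of walls already running
code = new_wall_code(existing)
print(len(code))  # always 5 characters
```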

The Chromecast device then loads the Photowall application and begins waiting for setup to complete. Once ready, the Chromecast displays the newly-generated wall code and waits for photos to start rolling in. If at any point the Chromecast loses power or internet connection, the device can be relaunched with an existing Photowall right from the administration dashboard.

Tying it all together: The mesh

A mesh network connects the Photowall’s host, the photo-sharing guests, and the Chromecast. The devices communicate with each other via websockets managed by a Google Compute Engine-powered node.js server application. A Google App Engine app coordinates wall creation, authentication and photo storage on the server side, using the App Engine Datastore.

After a unique code has been generated during the Photowall creation process, the App Engine app looks for a Compute Engine instance to use for websocket communication. The instance is then told to route websocket traffic flagged with the new wall’s unique code to all devices that are members of the Photowall with that code.

The instance’s address and the wall code are returned to the App Engine app. When a guest enters the wall code into the photo-sharing app in their browser, the App Engine app returns the address of the Compute Engine websocket server associated with that code. The app then connects to that server and joins the appropriate websocket/mesh network, allowing for two-way communication between the host and guests.

Why is this necessary? If a guest uploads a photo that the host decides to delete for whatever reason, the guest needs to be notified immediately so that they don’t try to take further action on it themselves.

A workaround for websockets

Using websockets this way proved to be challenging on iOS devices. When a device is locked or goes to sleep, the websocket connection should be terminated. However, on iOS it seems that JavaScript execution can be halted before the websocket close event is fired. This means we never observe the disconnection, and when the phone is unlocked again we are left believing the connection is still open.

To get around this inconsistent websocket disconnection issue, we implemented a check approximately every 5 seconds to examine the ready state of the socket. If it has disconnected we reconnect and continue monitoring. Messages are buffered in the event of a disconnection and sent in order when a connection is reestablished.
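
The original client is browser JavaScript; the Python sketch below mirrors the same idea with an illustrative connection object, so the class and method names here are assumptions, not the Photowall code.

```python
class BufferedSocket:
    """Sketch of the workaround described above: poll the socket's
    ready state, reconnect when it has silently dropped, and buffer
    outgoing messages until the connection is back."""

    def __init__(self, connect):
        self._connect = connect       # callable returning a connection object
        self.sock = connect()
        self.buffer = []

    def send(self, msg):
        if self.sock.connected:
            self.sock.send(msg)
        else:
            self.buffer.append(msg)   # hold messages while offline

    def check(self):
        """Run roughly every 5 seconds: reconnect and flush in order."""
        if not self.sock.connected:
            self.sock = self._connect()
            for msg in self.buffer:
                self.sock.send(msg)
            self.buffer.clear()

# A stand-in connection object for demonstration.
class FakeConn:
    def __init__(self):
        self.connected = True
        self.sent = []
    def send(self, msg):
        self.sent.append(msg)

bs = BufferedSocket(FakeConn)
bs.send("hello")              # delivered immediately
bs.sock.connected = False     # simulate a silent iOS disconnect
bs.send("queued")             # buffered while disconnected
bs.check()                    # reconnects and flushes the buffer in order
```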

Custom photo editing

The heart of the Photowall mobile web application is photo uploading. We created a custom photo editing experience for guests wanting to add their photos to a Photowall. They can upload photos straight from their device’s camera or choose one from its gallery. Then comes the fun stuff: cropping, doodling and captioning.

Photowall for Chromecast has been a fun opportunity to throw out everything we know about what a photo slideshow could be. And it’s just one example of what the Chromecast is capable of beyond content streaming. We barely scratched the surface of what the Google Cast SDK can do. We’re excited to see what’s next for Chromecast apps, and to build another.

For more on what’s under the hood of Photowall for Chromecast, you can tune in to our Google Developers Live event for an in-depth discussion on Thursday, April 3rd, 2014 at 2pm PDT.

Igor Clark is Creative Tech Lead at Google Creative Lab. The Creative Lab is a small team of makers and thinkers whose mission is to remind the world what it is they love about Google.


By Navneet Joneja, Cloud Platform Team

Cross-posted from the Google Cloud Platform blog

Editor’s note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.

For many developers, building a cloud-native application begins with a fundamental decision. Are you going to build it on Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Will you build large pieces of plumbing yourself so that you have complete flexibility and control, or will you cede control over the environment to get high productivity?

You shouldn’t have to choose between the openness, flexibility and control of IaaS, and the productivity and auto-management of PaaS. Describing solutions declaratively and taking advantage of intelligent management systems that understand and manage deployments leads to higher availability and quality of service. This frees engineers up to focus on writing code and significantly reduces the need to carry a pager.

Today, we’re introducing Managed Virtual Machines and Deployment Manager. These are our first steps towards enabling developers to have the best of both worlds.
Managed Virtual Machines

At Google Cloud Platform Live we took the first step towards ending the PaaS/IaaS dichotomy by introducing Managed Virtual Machines. With Managed VMs, you can build your application (or components of it) using virtual machines running in Google Compute Engine while benefiting from the auto-management and services that Google App Engine provides. This allows you to easily use technology that isn’t built into one of our managed runtimes, whether that is a different programming language, native code, or direct access to the file system or network stack. Further, if you find you need to ssh into a VM in order to debug a particularly thorny issue, it’s easy to “break glass” and do just that.

Moving from an App Engine runtime to a managed VM can be as easy as adding one line to your app.yaml file:

vm: true

At Cloud Platform Live, we also demonstrated how the next stage in the evolution of Managed VMs will allow you to bring your own runtime to App Engine, so you won’t be limited to the runtimes we support out of the box.

Managed Virtual Machines will soon launch in Limited Preview, and you can request access here starting today.

Introducing Google Cloud Deployment Manager

A key part of deploying software at scale is ensuring that configuration happens automatically from a single source of truth. This is because accumulated manual configuration often results in “snowflakes”: components that are unique and almost impossible to replicate, which in turn makes services harder to maintain, scale and troubleshoot.

These best practices are baked into the App Engine and Managed VM toolchains. Now, we’d like to make it easy for developers who are using unmanaged VMs to also take advantage of declarative configuration and foundational management capabilities like health-checking and auto-scaling. So, we’re launching Google Cloud Deployment Manager - a new service that allows you to create declarative deployments of Cloud Platform resources that can then be created, actively health monitored, and auto-scaled as needed.

Deployment Manager gives you a simple YAML syntax to create parameterizable templates that describe your Cloud Platform projects, including:
  • The attributes of any Compute Engine virtual machines (e.g. instance type, network settings, persistent disk, VM metadata)
  • Health checks and auto-scaling
  • Startup scripts that can be used to launch applications or other configuration management software (like Puppet, Chef or SaltStack)
Templates can be re-used across multiple deployments. Over time, we expect to extend Deployment Manager to cover additional Cloud Platform resources.
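
As a purely hypothetical sketch of such a template covering those three areas (the field names below are illustrative, not the actual Deployment Manager schema; consult the documentation for the real syntax):

```yaml
# Hypothetical sketch only: field names are illustrative,
# not the actual Deployment Manager template schema.
name: web-frontend
vm:
  machineType: n1-standard-1
  zone: us-central1-a
  metadata:
    startup-script: |
      #! /bin/bash
      apt-get install -y nginx   # launch the application server
healthCheck:
  path: /healthz
  intervalSec: 10
autoscaling:
  minReplicas: 2
  maxReplicas: 10
  targetCpuUtilization: 0.6
```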

Deployment Manager enables you to think in terms of logical infrastructure: you describe your service declaratively and let Google’s management systems deploy and manage its health on your behalf. Please see the Deployment Manager documentation to learn more and to sign up for the Limited Preview.

We believe that combining flexibility and openness with the ease and productivity of auto-management and a simple tool-chain is the foundation of the next-generation cloud. Managed VMs and Deployment Manager are the first steps we’re taking towards delivering that vision.

Update on operating systems

We introduced support for SuSE and Red Hat Enterprise Linux on Compute Engine in December. Today, SuSE is Generally Available, and we announced Open Preview for Red Hat Enterprise Linux last week. We’re also announcing the limited preview for Windows Server 2008 R2, and you can sign up for access now. Windows Server will be offered at $0.02 per hour for the f1-micro and g1-small instances and $0.04 per core per hour for all other instances (Windows prices are in addition to normal VM charges).

Simple, lower prices

As we mentioned on Tuesday, we think pricing should be simpler and more closely track cost reductions as a result of Moore’s law. So we’re making several changes to our pricing effective April 1, 2014.

First, we’ve cut virtual machine prices by up to 53%:
  • We’ve dropped prices by 32% across the board.
  • We’re introducing sustained-use discounts, which lower your effective price as your usage goes up. Discounts start when you use a VM for over 25% of the month and increase with usage. When you use a VM for an entire month, this amounts to an additional 30% discount.
What’s more, you don’t need to sign up for anything, make any financial commitments or pay any upfront fees. We automatically give you the best price for every VM you run, and you still only pay for the minutes that you use.

Here’s what that looks like for our standard 1-core (n1-standard-1) instance.

Finally, we’ve drastically simplified pricing for App Engine, and lowered pricing for instance-hours by 37.5%, dedicated memcache by 50% and Datastore writes by 30%. In addition, many services, including SNI SSL and PageSpeed, are now offered to all applications at no extra cost.

We hope you find these new capabilities useful, and look forward to hearing from you! If you haven’t yet done so, you can sign up for Google Cloud Platform here.

Navneet Joneja, Senior Product Manager
