Today, we are making it easier for you to run Hadoop jobs directly against your data in Google BigQuery and Google Cloud Datastore with the Preview release of Google BigQuery connector and Google Cloud Datastore connector for Hadoop. The Google BigQuery and Google Cloud Datastore connectors implement Hadoop’s InputFormat and OutputFormat interfaces for accessing data. These two connectors complement the existing Google Cloud Storage connector for Hadoop, which implements the Hadoop Distributed File System interface for accessing data in Google Cloud Storage.

The connectors can be automatically installed and configured when you deploy your Hadoop cluster with bdutil, simply by including the corresponding extra “env” file:
  • ./bdutil deploy

Diagram of Hadoop on Google Cloud Platform

These three connectors allow you to directly access data stored in Google Cloud Platform’s storage services from Hadoop and other Big Data open source software that uses Hadoop's IO abstractions. As a result, your valuable data is available simultaneously to multiple Big Data clusters and other services, without duplication. This should dramatically simplify the operational model for your Big Data processing on Google Cloud Platform.

Word-count MapReduce code samples are available to get you started.
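The samples themselves live with the connectors; as a rough, plain-Python sketch of the word-count logic such a MapReduce sample implements (the real samples use the connectors' Java InputFormat/OutputFormat classes, not this code), the map and reduce phases look like this:

```python
from collections import defaultdict
from itertools import chain

def word_count_map(record):
    # Map phase: emit a (word, 1) pair for every word in the input record.
    for word in record.lower().split():
        yield word, 1

def word_count_reduce(pairs):
    # Reduce phase: sum the counts emitted for each word.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

def run_word_count(records):
    # Chain the phases the way a Hadoop job shuffles mapper output to reducers.
    return word_count_reduce(chain.from_iterable(word_count_map(r) for r in records))
```

With the connectors, the input records would come from BigQuery or Cloud Datastore via the connector's InputFormat rather than from plain strings.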

As always, we would love to hear your feedback and ideas on improving these connectors and making Hadoop run better on Google Cloud Platform.

-Posted by Pratul Dublish, Product Manager

Today, we are announcing the release of App Engine 1.9.3.

This release offers stability and scalability improvements, themes that we will continue to build on over the next few releases. We know that you rely on App Engine for critical applications, and with the significant growth we’ve experienced over the past couple of years, we wanted to take a step back and spend a few release cycles with a laser focus on the core functionality that impacts your service and end users. As a result, new features and functionality may take a back seat to these improvements. That said, we fully expect to continue making progress with existing services, including Dedicated Memcache.

Dedicated Memcache
Today we are pleased to announce the General Availability of our dedicated memcache service in the European Union. Dedicated Memcache lets you provision additional, isolated memcache capacity for your application. For more details about this service, see our recent announcement.

Our goal is to make sure that App Engine is the best place to grow your application and business rapidly. As always, you can find the latest SDK on our release page along with detailed release notes and can share questions/comments with us at Stack Overflow.

When Applibot needed a flexible computing architecture to help them grow in the competitive mobile gaming market in Japan, they turned to Google Cloud Platform. When Tagtoo, an online content-tagging startup, needed to tap into the power of analytics to better serve digital ads to customers in Taiwan, they turned to Google Cloud Platform. In fact, companies all over the world are turning to Cloud Platform to create great apps and build successful businesses.

Now, more developers in Asia Pacific can experience the speed and scale of Google’s infrastructure with the expansion of Cloud Platform support. Today we switched on Compute Engine zones in Asia Pacific and deployed Cloud Storage and Cloud SQL in the region.

This region comes with our latest Cloud technology, including Andromeda, the codename for Google’s network virtualization stack, which provides blazing-fast networking performance as well as transparent maintenance with live migration and automatic restart for Compute Engine.

In addition to local product availability, the Google Cloud Platform website and the developer console will also be available in Japanese and Traditional Chinese. These websites have updated use cases, documentation and all sorts of goodies and tools to help local developers get started with Google Cloud Platform. Developers interested in learning more about Google Cloud Platform can join one of the Google Cloud Platform Global Roadshow events coming up in Tokyo, Taipei, Seoul or Hong Kong.

The launch of Cloud Platform support in Asia Pacific is in line with our increasing investment in the region and our commitment to developers around the world. To all our customers in the region, we would like to say “THANK YOU / 謝謝 / ありがとう ” for your support of Google Cloud Platform.

-Posted by Howard Wu, Head of Asia Pacific Marketing for Google Cloud Platform

Our friends at Google recently published a comprehensive overview of how to manage Google Compute Engine infrastructure via the various automation platforms available. The GCE team invited us to add our perspective on this topic and what follows here is a look at why we love GCE, how our customers are succeeding with Chef+GCE, and technical details on automating GCE resources with Chef.

Chef is betting on Compute Engine
You’ve often heard us reference the ‘coded business’. In short, we propose that technology has become the primary touch point for customers. Demand is relentless, and the only way to win the race to market is by automating the delivery of IT infrastructure and software.

This macro shift began in part because of Google’s success in leveraging large-scale compute to rapidly deliver goods and services to market. And when we say ‘large-scale’, there aren’t many, if any, businesses with more compute resources, expertise, and experience than Google.

So it makes a ton of sense that Google would pivot their massive compute infrastructure into an ultra-scalable cloud service. Obviously they know what they’re doing and now everyone from startups to enterprises can tap into Google’s compute mastery for themselves.

Working with the Compute Engine team fits perfectly into not only our view of how the IT industry, and business itself, is changing, but also what our customers want. Choice. Speed (lots and lots of speed). Scale. Flexibility. Reliability.

Why customers love using Chef and Google Compute Engine

Cloud-based delivery

Like the Google Cloud Platform, Chef offers customers all the benefits of cloud-based delivery. New users can get instant access to a powerful Enterprise Chef server hosted on the cloud, no credit card is required, and you can manage up to five instances for free.

When you want to use Chef to manage larger numbers of nodes, you can add this capability on a simple, pay-as-you-go basis. Customers can get started using Chef to configure GCE in minutes, start to finish. Ian Meyer, the Technical Ops Manager at AdMeld (now part of Google), praises the SaaS delivery model of Hosted Chef:

“Prior to deploying Hosted Chef,” said Meyer, “we did everything manually. It generally took me a couple of weeks to get access to the servers I needed and at least a day to add a new developer. With Chef, I can now add a couple of developers within 20 minutes. Additionally, when we set up a new ad serving system with data bags, the set-up time goes from two to three days to an hour. This is simply one of those tools that you need regardless of what your environment is.”

Speed & Scale
Just as customers are choosing GCE for its speed, our customers appreciate how Chef’s execution model pushes the heavy lifting to the Chef clients rather than compiling configuration instructions on the server. Chef stands well above the field, with a single Chef server handling 10,000 nodes at the default 30-minute update interval.

Our customers tell us that Chef is more flexible than any other offering. When the situation calls for it, Chef allows advanced users to work directly with infrastructure primitives and a full-fledged modern Ruby-based programming language.

Chef customers can tap into the shared knowledge, expertise, and helping hands of tens of thousands of Chef Community members, not to mention over 1000 Chef Cookbooks. The Chef Community provides a vibrant, welcoming resource for learning best practices. In recent years, high profile vendors have contributed and built on top of Chef, including Google, Rackspace, Dell, HP, Facebook, VMware, AWS and IBM.

Google will be a featured partner at this year’s ChefConf. Join Google’s Eric Johnson as he shares technical details about Chef’s integration and future roadmap with GCE.

Chef and GCE: Under the Hood
Chef makes it easy to get started with GCE. Once you’ve obtained a GCE account and configured your Chef workstation, you can extend Chef’s knife command-line tool with the knife-google plugin:

gem install knife-google
knife google setup

That last command will walk you through a one-time configuration of your knife workstation with GCE credentials.

Now you can use knife with the cookbooks on your Chef server to deploy infrastructure from Chef recipes to GCE instances. Here’s an example where we use Chef to create a Jenkins master node hosted in GCE:

knife google server create jenkins1 -Z us-central1-a -m n1-highcpu-2 -I debian-7-wheezy-v20131120 -r 'jenkins::master'

This command takes the following actions:

  • Creates a Debian VM instance in GCE’s us-central1-a zone with machine type n1-highcpu-2
  • Registers it as a node named ‘jenkins1’ with the Chef Server
  • Configures the node’s run_list attribute as ‘jenkins::master’
  • Uses the ssh protocol to run chef-client with that ‘master’ recipe from the Jenkins community cookbook on the new system.
At the end of this process, you’ll see a message like the one below:

Chef Client finished, 19/21 resources updated in 40.207903203 seconds

And now you have a Jenkins master. This and similar knife commands may be integrated into automation that can also spin up Jenkins tester systems for a complete continuous integration pipeline backed by GCE.

You can then use Chef Server features like search to manage the pipeline as long as you need it. But since Chef makes deployment so simple, and GCE makes it so fast, you can just destroy part or all of it when it’s no longer needed...
# Commands like this destroy unneeded nodes
knife google server delete tester1 -y --purge

… and recreate nodes ‘just-in-time style’ when demand picks back up again.

The quick turnaround on deployment and convergent configuration updates via Chef + GCE allows teams to experiment with developer automation at very low cost.

To get a deeper sense of how you can exploit the capabilities of GCE, please visit our GCE page outlining details around Chef’s knife-google plugin and explore the community library of coded infrastructure.

-Contributed by Adam Edwards, Platform Engineering at Chef

We love seeing our developers create groundbreaking new applications on top of our infrastructure. To help our current and prospective users gain insight into the vast array of these applications, we recently added a new case study. Whether you’re interested in learning about how businesses are building on our platform or just looking for inspiration for your next project, we hope you find it informative.

Kahuna used App Engine to create an automated mobile-engagement engine that would turn people who downloaded a mobile app into truly engaged customers.

Check out the full list of case studies to read about companies of varying sizes, industries, and use cases that are using Google Cloud Platform to build their products and businesses.

To learn more about Kahuna, please visit their website.

-Posted by Chris Palmisano, Account Manager

Today we are excited to announce a significantly updated Logs Viewer for App Engine users. Logs from all your instances can be viewed together in near real time, with greatly improved filtering, searching and browsing capabilities.

This release includes UI and functional improvements. We’ve added features that simplify navigation and make it easier to find the logs data you’re looking for.
(1) Filter on fields and use regular expressions in a single query
You can now use field filters (e.g. status:, protocol:, etc.) and regular expressions together in a single query. This is useful for filtering through events that occur with high frequency. In addition, you can add and remove filters ad hoc to drill down and then zoom out again until you find what you’re looking for. Simply modify the query and press ‘enter’ to refresh the logs.

When you click the search bar, we show possible completions for filtering fields as you type. For example, typing ‘re’ would produce four possible completions.
Note that filters of the same type are ORed to get results, while different filter types are ANDed together. So for example, status:400 status:500 regex:quota would produce all requests that returned HTTP status of either 400 OR 500, AND have the word quota in the log.
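To make those semantics concrete, here is a hypothetical Python sketch of how such a query could be evaluated against a log entry. The field names and the treatment of regex: as a match on the log text are illustrative assumptions, not the Logs Viewer's actual implementation:

```python
import re
from collections import defaultdict

def matches(entry, filters):
    """entry: dict of field name -> value; filters: list of (field, pattern) pairs.

    Filters of the same type are ORed together; different filter types are
    ANDed. Treating the 'regex' pseudo-field as a search over the entry's log
    text is an assumption made for this sketch.
    """
    grouped = defaultdict(list)
    for field, pattern in filters:
        grouped[field].append(pattern)
    return all(
        any(re.search(pattern,
                      str(entry.get("log" if field == "regex" else field, "")))
            for pattern in patterns)
        for field, patterns in grouped.items()
    )

# status:400 status:500 regex:quota -> (status 400 OR 500) AND log matches 'quota'
query = [("status", "400"), ("status", "500"), ("regex", "quota")]
```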

(2) Search or scroll through all of your logs
When you scroll through your logs in the new Logs Viewer, results are fetched until the console window is full. To retrieve additional logs that match the query, simply scroll down for newer results or up for older ones.

This provides you with a continuous view of your events, letting you move forward and backward in time without clicking “next” or refreshing the console. As related events frequently occur in close proximity to each other, this can help you home in on root causes faster. While results are being fetched, you will see a Loading… indicator at the top right corner of the viewer.

(3) Get it all in one place
With the Logs Viewer you can view and search logs from all your instances and apply filters to narrow in on a specific event, regardless of where it was generated. While this functionality exists in our old viewer, we are committed to making developers’ lives easier by making it simple to consume and analyze large amounts of data generated by highly scalable applications.

Those of you who have been using the old viewer should note that the same logs are available in both viewers. Additionally, the logs quota remains unchanged.

We’re working hard on additional improvements to make developers more productive and provide you with easier and more insightful access to your data. Stay tuned!

Your feedback is important!
Comments? Suggestions? Rants? Please send them to:

-Posted by Amir Hermelin, Product Manager

Today we are pleased to announce that Red Hat Enterprise Linux has exited Open Preview and is now Generally Available in two consumption models: on demand (pay by the hour) and Red Hat Cloud Access (pay by the year). This gives customers the ability to make use of Red Hat support, relationships, and technology on Google Cloud Platform, while maintaining a consistent level of service and support with consistent and predictable pricing from Red Hat. As an added benefit for subscribers of Red Hat Enterprise products, Red Hat Cloud Access enables qualified enterprise customers to migrate their current subscriptions for use on Google Cloud Platform. This starts with Red Hat Enterprise Linux (RHEL) subscriptions, with other Red Hat products to follow. You can learn more about Red Hat Cloud Access here, and find documentation for RHEL on Compute Engine here. Use of RHEL on Google Compute Engine is subject to additional terms and conditions (see the Google Cloud Platform Service Specific Terms here).

-Posted by Martin Buhr, Product Manager