As businesses become increasingly data-centric, and with the coming age of the Internet of Things (IoT), enterprises and data-driven organizations must become adept at efficiently deriving insights from their data. In this environment, any time spent building and managing infrastructure rather than working on applications is a lost opportunity. That’s why today we are excited to introduce Google Cloud Bigtable - a fully managed, high-performance, extremely scalable NoSQL database service accessible through the industry-standard, open-source Apache HBase API. Under the hood, this new service is powered by Bigtable, the same database that drives nearly all of Google’s largest applications.

Google Cloud Bigtable excels at large ingestion, analytics, and data-heavy serving workloads. It's ideal for enterprises and data-driven organizations that need to handle huge volumes of data, including businesses in the financial services, AdTech, energy, biomedical, and telecommunications industries.

Cloud Bigtable delivers the following key benefits to organizations building large systems:

  • Unmatched Performance: Single-digit millisecond latency and over 2X the performance per dollar of unmanaged NoSQL alternatives.
  • Open Source Interface: Because Cloud Bigtable is accessed through the HBase API, it is natively integrated with much of the existing big data and Hadoop ecosystem and supports Google’s big data products. Additionally, data can be imported from or exported to existing HBase clusters through simple bulk ingestion tools using industry-standard formats.
  • Low Cost: By providing a fully managed service and exceptional efficiency, Cloud Bigtable’s total cost of ownership is less than half the cost of its direct competition.
  • Security: Cloud Bigtable is built with a replicated storage strategy, and all data is encrypted both in-flight and at rest.
  • Simplicity: Creating or reconfiguring a Cloud Bigtable cluster is done through a simple user interface and can be completed in less than 10 seconds. As data is put into Cloud Bigtable the backing storage scales automatically, so there’s no need to do complicated estimates of capacity requirements.
  • Maturity: Over the past 10+ years, Bigtable has driven Google’s most critical applications. In addition, the HBase API is an industry-standard interface for combined operational and analytical workloads.
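The bulk ingestion path mentioned above can be sketched with the standard HBase MapReduce utilities (the table name and HDFS path here are hypothetical, and the import side assumes an HBase client configured to talk to Cloud Bigtable):

```shell
# Export an existing HBase table to SequenceFiles on HDFS (hypothetical table/path)
hbase org.apache.hadoop.hbase.mapreduce.Export my-table /export/my-table

# On a client configured against Cloud Bigtable, import the same files
hbase org.apache.hadoop.hbase.mapreduce.Import my-table /export/my-table
```

Because the same API sits on both sides, no application code needs to change during the migration.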
To help get you started quickly, we have assembled a service partner ecosystem to enable a diverse and expanding set of Cloud Bigtable use cases for our customers. Starting today, these service partners are available to help you take a new approach to data storage in your own environment.

  • SunGard, a leading financial software and services company, can help you build a scalable, easy-to-manage financial data platform on Cloud Bigtable. In fact, it has already built a financial audit trail system on Cloud Bigtable that is capable of ingesting a remarkable 2.5 million trade messages per second.
  • Pythian, a global data consulting company, has integrated OpenTSDB with Cloud Bigtable to provide a monitoring and metrics collection platform.
  • CCRi is a contributor and supporter of the open source spatio-temporal database “GeoMesa.” By integrating GeoMesa with Cloud Bigtable, CCRi is able to provide a scalable platform for real-time geospatial analysis in the cloud.
  • Telit Wireless Solutions, a global leader in Internet of Things (IoT) enablement, has integrated its IoT Application Enablement Platform, "m2mAIR", with Cloud Bigtable to enable much higher data-ingestion performance.

As of today, Google Cloud Bigtable is available as a beta release in multiple locations worldwide. We are already helping customers like Qubit migrate a multi-petabyte HBase deployment to Cloud Bigtable. We look forward to seeing what sorts of amazing, innovative applications you can create with this powerful piece of Google technology. If you have any technical questions, please post them to Stack Overflow with the tag 'google-cloud-bigtable', and if you have any feedback or feature requests, please send them to the feedback list.
-Posted by Cory O’Connor, Product Manager


When we created Google App Engine, we wanted to allow developers to build the way Google does. Unencumbered by old models, bad software, or limited infrastructure, we were free to bring the innovations born inside Google out to you. This remains our aim.

We know how important flexibility is to you in the languages you write in, the deployment model you use, the tools you build with, and the infrastructure on which your software runs. Today we’re announcing a unique collaboration between AppScale and Google Cloud Platform. We are making a direct investment, in the form of contributing engineers, to drive compatibility and interoperability between AppScale and App Engine.

Together, you’ll have even more infrastructure flexibility for your apps.
With AppScale you can run your App Engine app on any physical or cloud infrastructure, wherever you want. You also have the flexibility of configuring AppScale yourself, or working with AppScale Systems to manage the infrastructure for you.

Imagine you have a subset of customers who would be better served by your infrastructure because they have custom integration requirements. You can continue to serve your worldwide customers with the power of App Engine, and route requests from specific customers down to an installation of AppScale just for them.

Imagine you have an existing data center or colocation capacity, but want to get started building apps designed with the cloud in mind. Simply install AppScale today and start building apps that are cloud-ready.
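As a sketch of that workflow using the appscale-tools command line (the application path is hypothetical, and your AppScalefile will reflect your own machines):

```shell
# Generate an AppScalefile describing a deployment on your own machines
appscale init cluster

# Edit the AppScalefile to point at your servers, then start the platform
appscale up

# Deploy an App Engine application to the running AppScale deployment
appscale deploy ~/my-app
```

The same application can later be deployed to App Engine unchanged, which is the interoperability this collaboration is driving toward.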

Today, AppScale exposes a subset of the 1.8 App Engine API, while the current version of App Engine is 1.9. We’re working with AppScale engineers and the broader community to add compatibility with App Engine 1.9, including Versions and Modules. We’re eager to take community feedback on feature prioritization, and specifically we’re very interested in integrating Managed VMs into AppScale to further increase interoperability.

There are lots of details to share and we’re eager to keep you posted on our progress. We will continue to share updates on this blog, the AppScale project wiki, the AppScale Github page, and AppScale Systems’ blog. If you want to give it a try today, try AppScale’s Fast Start for running AppScale on top of Google Compute Engine.

We hope you’re as excited as we are about our work together; we can’t wait to see what new, amazing things you build with it. As always, I’m personally interested in your feedback, so please don’t hesitate to reach out with any questions, ideas, or great stories. Thanks!

-Miles Ward, Global Head of Solutions, Google Cloud Platform


Creating testing, staging, and production environments for your application is still too hard.  Today, you might run scripts to configure each environment or set up a separate server to run an open source configuration tool. Customizing and configuring the tools and testing the provisioning process takes time and effort. Additionally, if something goes wrong, you need to debug your server deployment and your tools.

Google Cloud Deployment Manager, now in beta, lets you build a description of what you want to deploy and takes care of the rest. The syntax is declarative, meaning you declare the desired outcome of your deployment, rather than the steps the system needs to take. For example, if you want to provision an auto-scaled pool of VMs, you would declaratively define the VM instance type you need, assign the VMs to a group, and configure the autoscaler and load balancer. Instead of creating and configuring each of these items through a series of command line interface calls or writing code to call the APIs, you can define these resources in a template and deploy them all through one command to Deployment Manager.
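As a sketch of what such a declarative description can look like (the resource name, zone, and image here are illustrative), a minimal configuration defines a single VM, and one command deploys everything the file describes:

```shell
# Write a minimal Deployment Manager configuration (names/zone/image are illustrative)
cat > vm.yaml <<'EOF'
resources:
- name: my-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-f
    machineType: zones/us-central1-f/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/debian-7-wheezy-v20150423
    networkInterfaces:
    - network: global/networks/default
EOF

# One command provisions everything the file describes:
# gcloud deployment-manager deployments create my-deployment --config vm.yaml
```

Adding the autoscaler and load balancer is a matter of declaring more resources in the same file, not of scripting more steps.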

Key features of Deployment Manager:
  • Define your infrastructure deployment in a template and deploy via command line or RESTful API
  • Templates support Jinja or Python, so you can take advantage of programming constructs, such as loops, conditionals, and parameterized inputs for deployments requiring logic
  • UI support for viewing and deleting deployments in Google Developers Console
  • Tight integration with Google Cloud Platform resources from compute to storage to networking, which provides faster provisioning and visualization of the deployments
A Sample Deployment

We’re often asked how Deployment Manager differs from existing open source configuration management systems like Puppet, Chef, SaltStack, or Ansible. Each of these is a powerful framework for configuration management, but none is natively integrated into Cloud Platform. To truly unlock the power of intent-driven management, we need a declarative system that allows you to express what you want to run, so that our internal systems can do the hard work of running it for you. Also, unlike other configuration management systems, Deployment Manager offers UI support directly in Developers Console, allowing you to view the architecture of your deployments.

Because Deployment Manager is natively supported by Cloud Platform, you don’t need to deploy or manage any additional configuration management software, and there’s no additional cost for running it. Take the complexity out of deploying your application on Cloud Platform and test drive Deployment Manager today. We also welcome your feedback.

-Posted by Chris Crall, Technical Program Manager


The Cloud Innovation World Cup is part of the world’s leading series of innovation competitions and aims to foster groundbreaking solutions and applications for cloud computing. This year’s Cloud Innovation World Cup is looking for the most innovative solutions in cloud computing, and we’re proud to be a co-sponsor. Contestants can submit their solutions in the following categories:
  • Mobility
  • Industry 4.0
  • Smart Living
  • Urban Infrastructure
  • ICT Business Services
Finalists will be announced at the Cloud World Forum in London on the 24th of June, 2015. The award ceremony, with finalist presentations, will take place on July 8th, 2015 at Google’s New York office.

Submission Deadline: 6th May 2015
Participants in the Cloud Innovation World Cup have the chance to develop their solutions using $100,000 worth of Google Cloud Platform credits.  When registering, simply select the option indicating that you want to create your solution using Google Cloud Platform.
Award Details
Deadline: 6th May 2015, 11:59pm
Submit your solution and win:
  • Placement in the “Hall of Fame”
  • Opportunity to present your innovative solution at the award ceremony
  • Speaking opportunities at international conferences
  • Dedicated marketing activities to promote the finalists and the category winners
  • Access to the worldwide network of Innovation World Cup Series, an opportunity to connect with important market players at a very early stage of product development
  • Business acceleration



For further information please visit:

Should you have any questions, please contact:

Today’s guest blogger is Fredrik Averpil, Technical Director at Industriromantik. Fredrik develops the custom computer graphics pipeline at Industriromantik, a digital production company specializing in computer generated still and moving imagery.

As a small design and visualization studio, we focus on creating beautiful 3D imagery – be it high-resolution product images or TV commercials. To successfully do this, we need to ensure we have access to enough rendering power, and at times, we find ourselves in a situation where our in-house render farm's capacity isn’t cutting it. That’s where Google Compute Engine comes in.

By taking our 3D graphics pipeline, applications, and project files to Compute Engine, we expand and contract available rendering capacity on-demand, in bursts. This enables us to increase project throughput, deliver on client requests, and handle render peak times with ease while remaining cost efficient – with the added bonus of getting us home in time for supper.

Figure 1. We created and rendered these high resolution interiors using our custom computer graphics production pipeline.

The setup
We use the very robust Pixar Tractor as our local render job manager, as it’s designed for scaling and can handle a large number of tasks simultaneously. Our local servers – which serve applications, custom tools, and project files – are mirrored to Compute Engine ahead of render time, which makes cloud rendering just as responsive as a local render. Because the Compute Engine instances run the Tractor client, they seamlessly pop up in the Tractor management dashboard in our local office. Pouring 1,600 cores’ worth of instances into your local 800-core render farm reminds you just how powerful this technology is.

Figure 2. Google Compute Engine  instances access the local office network through a VPN tunnel.

The basic setup of the file server is an instance equipped with enough RAM to allow for good file-caching performance. We use an n1-highmem-4 instance as a file server to serve 50 n1-standard-32 rendering instances. We then attach additional persistent disk storage (in increments of 1.5TB for high IOPS) to the file server instance to hold projects and applications. Because this pool of persistent disks uses ZFS, the file server's storage can be increased on demand, even while rendering is in progress. For increased ZFS caching performance, local SSD disks can be attached to the file server instance (a feature in beta). It all comes down to what you need for your specific project: setup will vary based on how many instances you’re planning to use and what kind of performance you’re looking for.
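As an illustration of that storage layer (the pool and device names here are hypothetical; attached persistent disks appear under /dev/disk/by-id/ on Compute Engine), growing the pool is simply a matter of adding disks:

```shell
# Create a ZFS pool on the first attached 1.5TB persistent disk
zpool create projects /dev/disk/by-id/google-projects-disk-1

# Grow the pool on demand by adding another disk, even while renders run
zpool add projects /dev/disk/by-id/google-projects-disk-2

# Share the project filesystem to the render instances, e.g. over NFS
zfs set sharenfs=on projects
```

Because ZFS stripes across whatever devices are in the pool, the running render jobs see the extra capacity immediately.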

Operations on the file server and file transfers can be performed over SSH from a Google Compute Engine-authenticated session, and ultimately be automated through Tractor:

# Create folder on the GCE file server, reachable on its public IP over SSH port 22
# (replace <fileserver-ip> with the instance's external IP address)
ssh -p 22 -t -t -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine fredrik@<fileserver-ip> "sudo mkdir -p /projects/projx/"
# Upload project files to the GCE file server
rsync -avuht -r -L --progress -e "ssh -p 22 -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine" /projects/projx/ fredrik@<fileserver-ip>:/projects/projx/

If you store your project data in a bucket, you could also retrieve it from there:

# Copy files from a bucket onto the file server, reachable on its public IP over SSH port 22
# (replace <fileserver-ip> with the instance's external IP address)
ssh -p 22 -t -t -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine fredrik@<fileserver-ip> "gsutil -m rsync -r gs://your-bucket/projects/projx/ /projects/projx/"

Software executing on Compute Engine (managed by Tractor) accesses software licenses served from our local office via the Internet. Instances running the Tractor client also need to be able to contact the local Tractor server. All of this can be achieved using the beta of VPN, as seen in figure 2 above.

Since the number of software licenses cannot be scaled on demand the way the number of instances can, we take advantage of the fastest machines available: 32-core instances, which deliver a 97-98% speed boost over 16-core instances (awesome scaling!) when rendering with V-Ray for Maya, our primary choice of renderer.

When a rendering task completes, the files can be copied back home easily, again managed by Tractor, directly after a frame render completes:

# Copy files from Google Compute Engine file server "fileserver-1" onto local machine
gcloud compute copy-files username@fileserver-1:/projects/projx/render/*.exr /local_dest_dir

Figure 3. Tractor dashboard, showing queued jobs and the task tree of a standard render job.

Avoiding manual labour and micromanagement of Compute Engine rendering is highly recommended. This is also where Tractor excels: the automation of complex processes. Daisy-chaining tasks in Tractor, such as spinning up the file server, allocating storage, and transferring files makes large and parallel jobs a breeze to manage.

Figure 4. Tractor task tree.

In figure 4, the daisy-chaining of tasks is illustrated. When initiating a project upload to the Google Compute Engine file server, a disk is attached to the file server and added to the ZFS pool. Project files are uploaded as well as the specific software versions required. No files can be uploaded before the disk storage has been attached, so in this case, some processes are waiting for other processes to complete before initiating.

With Compute Engine and its per-minute billing approach, I’ve stopped worrying and started loving the auto-scaling of instances. By having a script check in with Tractor (using its query Python API) for pending tasks every once in a while, we can spin up instances (via the Google Cloud SDK) to crunch a render and quickly wind them down when no longer needed. Now that’s micromanagement done right.
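A minimal sketch of that loop, using Tractor's tq command-line front end to the query API (the tq query shown is schematic, and the instance name and zone are placeholders):

```shell
# Count tasks waiting in the Tractor queue (schematic; see the tq documentation)
PENDING=$(tq tasks ready | wc -l)

if [ "$PENDING" -gt 0 ]; then
  # Work is queued: spin up a 32-core render instance
  gcloud compute instances create render-node-1 \
      --machine-type n1-standard-32 --zone us-central1-f
else
  # Queue is empty: wind the instance down to stop paying for it
  gcloud compute instances delete render-node-1 --zone us-central1-f --quiet
fi
```

Run from cron every few minutes, a script like this keeps capacity tracking the queue, and per-minute billing means idle instances cost almost nothing before they are reclaimed.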

Figure 5. High resolution exterior 3D rendering for Etaget, Stockholm.

For anyone who wants to utilize Compute Engine rendering but needs a turnkey, managed solution, I’d recommend checking out the beta of Zync Render, which utilizes the excellent Google Cloud Platform infrastructure. Zync Render has its own front end UI that manages the file transfer and provides the software licenses required for rendering so you don’t have to implement a Compute Engine specific integration. This makes that part of the rendering a whole lot easier. I’m keeping my fingers crossed that Zync Render will ultimately offer their software license server for Google Compute Engine users so that we can scale licenses along with any number of instances seamlessly.

I believe that every modern digital production company dealing with 3D rendering today, regardless of size, needs to leverage affordable cloud rendering in some shape or form in order to stay competitive. I also believe that the key to success is to focus on automation. The Google Cloud SDK provides excellent tools to do exactly this by pairing the powerful Google Compute Engine with an advanced and highly customizable render job manager, such as Pixar’s Tractor. For smaller companies or individuals who do not wish to orchestrate these advanced queuing systems themselves, Zync Render takes advantage of the Compute Engine infrastructure.

For additional computer graphics pipeline articles, tips and tricks, check out Fredrik’s blog at and for more information about Industriromantik, visit