Fundamentals of Google Cloud Platform: A Guided Tour (GDD India ’17)

[MUSIC PLAYING] METE ATAMEL: Hello. My name is Mete Atamel. I’m a Developer Advocate at Google, based in London. MARC COHEN: [SPEAKING HINDI] METE ATAMEL: Nice. So today, we’re going
to be talking about– yeah. Thank you. Today, we’re going to be
talking about fundamentals of Google Cloud Platform. In 30 minutes, it’s going
to be a fast-paced talk because we’re going to have a
lot of things to talk about. But hopefully,
we’ll go through it and you’ll learn
something new today. MARC COHEN: Yeah. Sorry. We’ll start with a
little bit of history. So there’s a really cool website
called the Wayback Machine. And if you check that out
and plug in the Google Cloud Platform, or search for Google
Cloud Platform from back in those days, you’ll see
what the actual landing page looks like. This is it. How many people were using
Google Cloud Platform in 2012? OK. So a few die-hard folks there. It was very simple back then. There were only four products
in the entire platform. So it was pretty easy to
wrap your head around it and understand what it could do. Nowadays, it’s a
little bit different. We now have over 80 services. And that list continues
to grow every day. In fact, there’s so many
different capabilities and services that I can’t
fit all the little product icons on one slide. That’s a really good thing. It’s nice that we
have all these– I mean, it’s more than nice. It’s really enabling
and really powerful. You can build almost anything
you can imagine in the Cloud nowadays. But it comes at the cost
of a lot of complexity and a lot of cognitive load. So it takes quite
a bit of effort to figure out what all these
services are, when to use them, and how to use them effectively
in your applications. And that’s really the
point of this session. We’re going to take you
through an overview. We’re not going to go
too deep into anything but try to give you a taste
of all of the capabilities we have in Google
Cloud Platform. So to start off, we will
cover all the Compute options. METE ATAMEL: Yep. Thanks, Marc. So by Compute, what I
mean is that, let’s say you have a piece of code and you
want to deploy to Google Cloud. What options do you have? That’s what we
want to cover here. So at the very high level,
when we talk about Compute, there are three distinct
ways of running your code. And this is not
just Google Cloud, but basically in any Cloud
you have these options. So back in the
day, before Cloud, when you want to
deploy your application you would get a machine. You would decide how
much CPU you want. You would decide how
much memory you want. You would get a hard drive and
decide how big that should be. Then you install the
operating system. And after that, you will
install the libraries that you need on top of
the operating system. And then finally, you would get
to install your application. So with virtual machines, it’s
pretty much the similar thing except it’s virtualized. It’s not a physical
machine anymore. And you probably don’t
have it yourself. You have it in someone
else’s data center. So in virtual machines,
it’s the same idea. You pick your CPU, your
memory, your storage. And then install your operating
system and then it’s yours. It’s your responsibility to
maintain it and to run it. Now more recently, we have
something called Containers. So the idea of Containers is
that instead of virtualizing the operating system and all
the way up to your application, you want to virtualize
your application and its dependencies, or
the libraries that you need. And since you’re
not virtualizing the whole operating
system, these containers are really small. So they are really
easy to create. They’re really easy to run. And they’re really
easy to move around. So you would use docker push and pull to move these images around and deploy them really easily. And then lastly, serverless. So with serverless, you don’t really care about virtual machines. You don’t really care
about containers. You have a piece
of functionality or an application. And you just want to deploy it
and let someone else manage it for you. So it becomes someone
else’s problem, basically, how to deploy that application
and how to manage it. But as a user, you are
basically focusing on your code and just deploy your code and
let someone else manage it for you. Now, and one thing to
mention is that most people, they started with
virtual machines. Now they are switching
to containers because it makes sense and
they are much more agile than virtual machines. And they’re going
towards serverless. But most people
nowadays, they’re kind of in the container world. But containers by themselves,
they’re not enough. Because containers,
they give you a context to run
your application in a consistent way, everywhere. But when you run
your application, you need more than containers. You need resilience,
for example. So you probably
need health checks. You need redundancy,
so you probably need to run multiple containers. You need configuration,
so you need a way to define
your configuration and get the configuration
to your application. So because of all
these things that you need to do in
production, there are open source projects,
like Kubernetes, which try to run your
containers in production and make it easy for you. And even Kubernetes by itself is not enough, nowadays. Because Kubernetes basically lets you run your containers in production and watches
your containers, makes sure they run, but then
you need more than that. When you write a
microservice, for example, you need things like service-to-service authentication. You need a way to
visualize your services, so you need some
kind of dashboards. And you need logging. So there’s all
these extra things that you still need to
do on top of Kubernetes. So that’s why we have
something called Istio. It’s another open
source project that tries to create
this service mesh so that you don’t have to create
everything from scratch. It gives you logging and monitoring and service-to-service authentication
and all that good stuff. Now in terms of what options
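Those needs, health checks, redundancy, and configuration, map directly onto fields of a Kubernetes Deployment manifest. A minimal sketch, with every name and the image path made up for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service            # illustrative name
spec:
  replicas: 3                    # redundancy: run multiple containers
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
      - name: hello-service
        image:   # illustrative image path
        livenessProbe:           # resilience: a health check
          httpGet:
            path: /healthz
            port: 8080
        env:                     # configuration handed to the app
        - name: GREETING
          value: "hello"
```

Kubernetes keeps three copies running, restarts any copy whose health check fails, and injects the configuration, which is exactly the production work described above.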
you have on Google Cloud to deploy this, first
you need to decide how much management you want
and how much customization you want. And by definition, the more
customizable things are, they end up being less managed. And the more managed
they are, they end up being less customizable. So on the higher customizable
side, we have Compute Engine. Compute Engine is what
we call virtual machines in Google Cloud. You can get a Linux machine or
you can get a Windows machine, and we support multiple
versions of Linux and multiple
versions of Windows. And once you have the machine
with the operating system, you can pretty much
install whatever you want on that machine. It’s your machine to maintain. It’s your machine
to keep up to date and all that kind of stuff. Of course, installing all this
software on your Compute Engine instances, it takes a while. So we have something called
Cloud Launcher. It’s a marketplace for solutions
to deploy to Compute Engine. So if you want to deploy,
for example, LAMP stack, or if you want to
deploy WordPress, there are solutions for that. So you can just find the
solution and with one click, just deploy that solution
to Compute Engine. So let me quickly show
you how this works. So here I am in
Google Cloud Console. This is where all the
Google Cloud products are. And we are interested
in Compute Engine. So when you go to Compute Engine
page, there is [INAUDIBLE].. So here you can see all the VMs,
virtual machines that you have. If you want to create
a new instance, you just hit Create Instance. And then you can give
your instance a name. So we will call this– let me
make this a little bit bigger– we’ll call this instance India. And then you can choose where
to deploy your instance. Since we’re in Asia, let’s
choose somewhere in Asia. You can customize
your machine type. So you can basically
choose, well, how many cores you want from 1 to 64. You can also choose your memory. On top of this, we also have
pre-configured machine types. So you can choose a
microinstance or small instance. So I just pick one of these. Next, you want to choose
your operating system. So we have all the flavors
of Linux and Windows as well. So these are the
Linux instances. We have Windows instances. I’ll just choose a Linux one. You can also install
applications. For example, if you want to
have a SQL Server instance, there’s an application
image for that. You can even create
your own images. So you can create your own
image and then deploy it here. And then you can use
it in your own project. So for this one I’m just
going to use a Linux instance, and I’m going to also allow
HTTP and HTTPS traffic and hit Create. And this will give me a Linux
instance running in the Cloud. And then once it’s
up and running, which will be like
30 seconds or so, you can SSH into it. So on here, there’s SSH. And if it’s a Windows
instance, you can RDP into that by just clicking here and you
can just directly go right in there. And I also mentioned
Cloud Launcher. So if we go here,
there’s something called Cloud Launcher. It has a bunch of different
solutions that you can install, such as LAMP stack, WordPress. I can, for example,
search for ASP.NET. This is a solution for
deploying Windows Server, the ASP.NET Framework, IIS, and SQL Express. So here I’ll just do Launch in
Compute Engine, give it a name, ASP.NET let’s say India– oop, let’s say India 2,
because I already have one– and this time let’s
choose Australia. And just keep the defaults. So with this one click, I can
get a Windows Server, IIS, SQL
ASP.NET Framework, and I can just deploy my
ASP.NET applications to Google Cloud. All right. So that’s Compute Engine. On the other end
of the spectrum, in the highly managed part,
we have Cloud Functions. And the idea of
Cloud Functions is that you define a
Node.js function with inputs and outputs. And you also define how that
function should be triggered. So the trigger can
be an HTTP request or it can be a Pub/Sub message. So when a message
goes to that topic, then that triggers
the function for you. And then that’s it. You just deploy the function
and Google maintains it for you. And you don’t really care where
it’s running, how it’s running, you just specify where
your function should live, in what zone, and that’s it. So just to show you an
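The functions in this talk are Node.js, but the shape is easy to see in Python, which Cloud Functions added support for later. A sketch of a Pub/Sub-triggered background function, assuming the usual base64-encoded data field in the event payload:

```python
import base64

def hello_pubsub(event, context):
    # Background function: 'event' carries the Pub/Sub message,
    # with the payload base64-encoded under the 'data' key.
    message = base64.b64decode(event["data"]).decode("utf-8")
    print(f"Received: {message}")
    return message

# Simulating the trigger locally with a fake event:
fake_event = {"data": base64.b64encode(b"hello, India").decode("utf-8")}
hello_pubsub(fake_event, None)  # prints "Received: hello, India"
```

Deploying it is just handing this file to the platform; the wiring from topic to function is what you configure, not code.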
example of this as well. So if we go back
here, under Compute, we have Cloud Functions. And in the first page, you
can see all the functions that you have deployed. So I have a function
called HelloWorld. Once you click here, you will
see the function invocation. So how many times people
call your function. You can see what kind
of trigger it has. So this is a Pub/Sub topic function. You can see the source
of the function. And it’s a really simple
function that takes an event and just logs out
the events message. And you can even
test it right here. So you can trigger the event
and say, let’s pass a message, and say, hello, India. And then if you
hit that function, now this is running
in the Cloud. It already ran, and it
will fetch the logs, and then you will see
the console log out once you get the logs here. OK? So that’s functions. Functions are really great
and they are a really easy way of running your
code in the Cloud in a way where the infrastructure is completely transparent to you. But sometimes you need
more than a function. You need an application. So for that we have App Engine. App Engine is similar to
Cloud Functions in the sense that you deploy your
application and you don’t really care where that
application is running. But it’s more than a function. So you can have multiple
services in App Engine. You can have a front end
and you can have a back end. And you can just deploy
the whole thing together. And App Engine just
manages that for you. So let me show you
that quickly, as well. So in console, again,
it’s the same place. We go here, App Engine. The first thing that you realize
is that there’s a dashboard. So you have multiple
versions of your application. And this version is
getting 51% of the traffic. This is getting 49%. And then you can see the
graphs for latency traffic, CPU utilization,
stuff like that. The services are
the microservices that you can deploy
in App Engine. In this case, I’m just
having one default service. And in other versions, you
can see the different versions that you have. So I have two versions
running on two instances. So these are running on two VMs. And this is autoscaled by default. So you can autoscale
from 2 to 20 automatically, so
you don’t have to do anything special for that. If you want to change
the traffic allocation so that, let’s say your new
version gets all the traffic, I can easily change
this to 100%, hit Save, and this will direct all the
traffic now to the new version. So that’s App Engine. And the last thing
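The autoscaling range mentioned in the demo is something you declare in the service’s app.yaml. In today’s App Engine standard environment it looks roughly like this; the runtime line is an illustrative assumption, not from the talk:

```yaml
runtime: python39        # illustrative runtime
automatic_scaling:
  min_instances: 2       # the 2-to-20 autoscaling range from the demo
  max_instances: 20
```

Traffic splitting between versions, like the 51/49 split on the dashboard, is then managed per version in the console or CLI rather than in this file.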
I want to talk about is Kubernetes Engine. So Kubernetes Engine
is basically a way to run containers in the Cloud. Kubernetes is an
open source project and it can run anywhere. But we try to make it really
easy to run in Google Cloud, so that’s what
Kubernetes Engine is. So we maintain the
master for you. And we also give you an easy
way to create your cluster and schedule your containers. So it sits between serverless
world and the virtual machine world. So if you want to be
somewhere in the middle where you have containers, you can use
Kubernetes Engine to do that. And then we have tools. We have Container Builder
and Container Registry for containers. So you can create
containers in the Cloud using Container Builder. It’s a really fast way
of building containers. And once the
containers are built, they are saved into a place
called Container Registry. This is a private space
for your containers. And once your container is saved there, you can deploy it to
Kubernetes or you can deploy it to App Engine really easily. So that’s how you run
your code in the Cloud. And one thing that
I want to point out, when you’re running your
service in the Cloud, you’re running in
the same network that powers all these products. So the same network that powers
Google Search, Gmail, and Maps, each serving a billion users. You’re running in the
same network as well. So all the improvements
and optimizations we do in our network
for our own services, they benefit you
indirectly because of that. And this is what our network looks like. You can see all the
edge endpoints, and you can see all the
cables all around the world. So it’s a truly global,
highly optimized fiber network all around the world. OK. So that’s how you run your code. Now let’s take a look at
how you store your data. MARC COHEN: Thanks, Mete. At a high level,
storage is easy. You write some bits
and you read the bits and hopefully you get the same
thing out that you put in. I have the feeling
that some of the folks here are looking at my
graphic and wondering what these things are. It’s a young crowd. But actually, it’s
pretty complicated stuff. So anybody that implements
a real world storage system knows about this. There’s all kinds of
consistency issues, ACID semantics, scaling issues, replication, all sorts of really
fundamental computer science challenges crop up in building
complex storage systems. And there’s a lot of
different ways to store data. Right? So this is kind of
a very high level summary of all the
different products and what they’re good for. And I’m not going to– it’s
a bit of an eye chart– I’m not going to walk through
every product, one at a time. But I will kind of break it
down into some categories. When I look at this, I kind
of have the same reaction I had with the 80 icons slide. It’s like, I just want
to store some bytes. Why do I have to
learn all this stuff? And I’m going to go through
a few different subcategories to hopefully help clarify this. So the first category I like to
think about is structured data. And the mental model for
this is a spreadsheet. So this is where
you have, your data has got a well-defined
sort of set of fields. And the rows in the
spreadsheet correspond to the instances of your data. And the columns correspond
to the fields in your schema. So this is a pretty
familiar way to go. And you know, this is the
classic relational database system. And it’s driven most frequently
by a very standardized query system called SQL. And we have two different
products in this domain. We have Cloud SQL,
which gives you the ability to create managed
database servers in the Cloud. So you’re still
thinking about servers and you have to decide if
you want MySQL or Postgres. But the backup of the database,
the administration of it, the maintenance keeping it up,
and just managing those servers is taken care of for you. So it takes a lot of the
hassle out of your hands. And then Spanner
is kind of a level of abstraction above that. It gives you a true database
as a service capability where it’s global, has
all of the ACID semantics you would expect from a
relational database management system, and it’s
multi-region, very scalable, and has really a
tremendous set of value for a database service. The next category is
unstructured data. This is also called NoSQL. So the mental model
I have for this one is it’s a little
bit like a library. You’re storing data in a
library, or books in a library, I should say. So there’s a very
efficient index that you can use to find
any book on any shelf. But the contents are
totally unregulated. Right? The librarian doesn’t care
what’s in any particular book. And there’s no uniformity. All the books have
different content. And so it’s kind of a
little bit like what a NoSQL database is like. There’s a key that gets you
to the value you care about. But the content or structure
of that value is up to you. And we have Datastore, which is a hierarchical, distributed key-value store. Firestore, which is ideal
for mobile applications and real time response
type scenarios. And Bigtable, which is really
well suited for huge volume transactional– financial
transactions, IoT, event sources– things like that. And all of these are, again,
managed services in the Cloud. You don’t have to
think about where are the nodes, how
many do I need, how to configure them
and install software. Just make your database
calls and everything else is taken care of for you. The last category I want to
talk about is an Object Store. So the idea here is it’s a
lot like renting a storage unit at one of these places, where you rent a locker or something like that. And you know, there’s
a physical space there where you can put your stuff. And it doesn’t matter
what you put in there. It’s up to you. And that’s kind of what these
object stores are like. You have a bucket and
you own that bucket. You can put as many
objects in it as you want. There are no rules about the
content that goes into them. And you have security
protections on it, which is kind of like
your key to the locker. And you can serve those
objects from the Cloud. You can even serve
web sites directly from these storage buckets. So that’s kind of
a quick overview of the different storage
products we have. There’s a lot more to it. But I did want to jump
into Spanner again and just highlight it. Because I think it’s really
interesting and really revolutionary in many ways. What Spanner does is
it kind of bridges the gap between traditional
relational database systems, which are very strong in
terms of data integrity and consistency, but have
always been a challenge in terms of horizontal scalability. Right? Lot of people have had
to do sharding and lots of elaborate techniques
to scale up these systems. And then on the
other hand, you have the non-relational
systems, the NoSQL systems, which are great with
horizontal scaling but they’re often not giving
strong consistency, or as strong ACID semantics as the
relational database families. And what Spanner does
is it kind of gives you the best of both worlds. So I’m just going
to quickly jump into a little demo on Spanner. Let’s see. Don’t see it. There we go. So we’ve got an
instance of Spanner. You can think of an instance
as a collection of databases. I’ve got one here that
I’ve called GDD India. And inside that instance is
a database called University. So I’ve kind of fabricated
a typical database for a university, consisting
of three tables. I can drill down
into these tables. Professor has a
collection of professors with a professor ID
and a name, and so on. And basically you’re looking at
something that looks very much and feels very much like a
traditional relational database management system. But I have no concept
of where the software is running or keeping it
administered or anything. It’s all taken care of for me. I can also manage the
schema directly here. So let’s imagine
that the student table needs another field. Let’s decide to
add a– right now I have a name, a major,
and a student ID. Let’s add a GPA. So we’ll say GPA
and we’ll give it a float type, and hit Done and Save. And Spanner will
basically tell me that it’s going to, eventually– the old schema will
continue being served until the update completes. And so it’s going to now update
my database in real time. But the nice thing is it’s never
going to take the service down. I can continue to serve
tremendously high volumes of requests while it’s updating
my database in the background, which is really convenient. The other nice thing is,
let’s suppose that I’m wildly successful
with my database and the traffic far
exceeds the capacity. I can simply drill
down into my instance, I can click Edit instance, and
there’s a place here called Node, a field called Node. I can change that from
1 to 3 and click Save. I’m going to pay
for more capacity if I use it, obviously. But I’ve just tripled the
capacity of my database. I didn’t have to
talk about machines, where they might live,
what I might do with them. I just said triple my capacity. And the database
is now going to be able to serve three times
the amount of requests. And that’s it for Spanner. Next up, we’ll cover big data. METE ATAMEL: Yes. Thanks, Marc. So once you give people
a way to store data, they’re going to
store a lot of data. And then data gets really big. So how do you deal with that? How do you deal with big data? That’s what we’re
going to cover here. So big data processing at
Google started with MapReduce. Who has heard of MapReduce, or used MapReduce? OK. So people know. It was a paper that came out in 2004. You can read it. It’s a really nice
paper to read. And in MapReduce,
we basically explain how to take lots of data,
divide that into small chunks, and then send them to
multiple parallel machines and then process that
data on those machines and then combine the results
in the end to get the result. So after MapReduce, there was
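That recipe, map over chunks in parallel and then combine the results, can be sketched in a few lines of plain Python. This is a toy, single-machine word count, not the distributed system the paper describes:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (word, 1) pairs from each input chunk.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    # Shuffle + reduce: group the pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["hello cloud", "hello world"]
print(reduce_phase(map_phase(docs)))  # {'hello': 2, 'cloud': 1, 'world': 1}
```

The real system’s contribution is everything around this loop: splitting the input, scheduling the map tasks on many machines, shuffling pairs to reducers, and retrying failures.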
a lot of innovation at Google. So we had things like
Bigtable, and then we had MillWheel for stream processing, we had FlumeJava for high-level pipeline modeling, and we had Spanner that
Marc just talked about. And most of these,
they were either papers or they were internal
implementations at Google. So it wasn’t available
to the outside world. We recently, of
course, started making these available to
people via Google Cloud. So Pub/Sub and Spanner, they
are now services in Google Cloud that you can use. But back in the day, they
weren’t available to people. So people at Apache, they saw
that and they read these papers and they started creating
open source projects. So they created Hadoop,
which is an open source implementation of MapReduce. They created Spark and Hive and
all these open source projects. But because the innovation
was split into two, we have two products in
Google Cloud to support both. So all the innovation at
Google ended up in something called Cloud Dataflow. It’s a model and also a service,
a fully managed service to do batch and
stream processing. And if you’re already
doing Hadoop and Spark in the open source, you
can bring those clusters to Google Cloud. So there’s that product called
Cloud Dataproc, where you can get a cluster in 90 seconds. And then just run your
Hadoop and Spark clusters in Google Cloud. And when you are
dealing with big data, there’s a lifecycle that
you need to go through. So Marc will explain to us
what that life cycle is. MARC COHEN: Thanks. Yes. So the typical lifecycle
involves five steps. First off, you need
to capture some data. And there are a lot of products
available to do that. The one I’m going
to highlight today is Cloud Pub/Sub, which
is a message queue. You can have multiple publishers
and multiple subscribers. And so we can motivate
this a little bit with a real world example. Imagine that we have
a very popular web site like Wikipedia. And we want to capture
all of the events from people viewing
pages on Wikipedia. We could stream those
events into Pub/Sub and then we could have
those events routed into our data processing phase. And the data processing
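The publish/subscribe pattern itself fits in a few lines of Python. This is a toy, in-memory stand-in for what Cloud Pub/Sub provides as a durable, managed service; the class and topic names are made up:

```python
from collections import defaultdict

class ToyPubSub:
    """In-memory publish/subscribe: topics fan messages out to subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber on the topic receives every message.
        for callback in self._subscribers[topic]:
            callback(message)

bus = ToyPubSub()
seen = []
bus.subscribe("pageviews", seen.append)
bus.publish("pageviews", {"page": "Bangalore", "views": 1})
print(seen)  # [{'page': 'Bangalore', 'views': 1}]
```

The managed service adds what this toy lacks: persistence, delivery retries, and the ability to absorb something like Wikipedia’s full pageview firehose.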
might involve something like Cloud Dataprep, which is a kind
of data cleansing, rearranging, normalizing service. And then from there, it might
be fed into Cloud Dataflow, perhaps. Now Cloud Dataflow, just as Mete was speaking about earlier, is a processing stage very much
is a processing stage very much based on MapReduce. It can perform mass
bulk processing on batches at a time,
or it can process streams as they’re
arriving, which is great for IoT and
transaction based things. From there, once we
process the data, we might want to store
the intermediate results so we can analyze it without
having to continually reprocess it. There are lots of places we
can store it, as we just saw. One of the key ones if
we’re doing analytics might be BigQuery. So BigQuery gives us
this columnar database that enables very
fast SQL-based analysis of the data we’ve stored. So from there, we can use
tools to analyze the data. There are lots of
different tools. But the one I’ll show you
in a moment is BigQuery. We’ve got it in the database. And then we’ll do
some queries on this. And finally, we might
want to do something with these analytics we’ve done. We’ve done our
interactive analysis. And we might want
to have a nice way to share it with other folks,
coworkers, rest of the world. And that’s where
Data Studio comes in. That’s a service for
building dashboards, nice visualizations. And Data Lab, which is
a hosted-in-the-Cloud Jupyter Notebook service. So for people that like
interactive notebooks– it’s particularly popular
in the data science world– Data Lab is something that you
might want to take a look at. So here’s what
BigQuery looks like. And what I’m doing is I’m
running a query on a data set that actually
corresponds to the example I just talked about. So I have all of the data
for page views on Wikipedia from May of 2016. And I’m going to
run a very simple– can people see this? Going to run a very
simple query that’s just going to sum the requests. And we can just wait
while it does that. And it tells us– hopefully soon,
took 5.2 seconds– it tells us how much data
was actually processed. So it was 38.6, almost 40
gigabytes worth of data. It just did a scan
in five seconds. And it told us the number of
events, as you can see here. I think this is 19 billion. So almost 20 billion rows it
scanned in that amount of time. Now we’ll make it just a
little bit more complicated. I’ll do a query here
for all the articles that reference Bangalore. So I’ll pull this up here. Now it’s a little bit tricky. I could just match Bangalore. But in English, there are
two spellings, Bangalore and Bengaluru. And so the way I’d like to find both of those is to do a regular expression. So here I’m matching Bangal followed by ore or uru. And sorry– that’s right here. And what this is going
to do is actually run a regular expression
match on 20 billion rows in a database. And you saw it did that
in 7 and 1/2 seconds. And by the way, this no
cache results up here is an option you can specify. Normally, I would turn
that off because if I run the same query
again, I want to see the cache for speed reasons. But I have that
disabled because I want it to be a more true test. But as you can see, it
did not only a table scan, but a regular expression on 20
billion rows in 7.5 seconds. So that’s the power of BigQuery. METE ATAMEL: And lastly, let’s
talk about machine learning. Because everyone wants to
hear about machine learning. If you look at this
chart, this basically shows the number of deep learning models at Google across different products. So you can see the
exponential growth of machine learning at Google. And this is true
not just for Google, but for pretty
much all companies. So when it comes to
machine learning, there are two distinct ways
of using machine learning. The easy way is to let someone else do the machine learning and train a model for you. And then you just consume that
machine learning using an API. So that’s the easy way. But sometimes the trained
model is not enough for you because maybe the trained model
is not exactly what you want. So in that case, you
need to actually do the machine learning yourself. And so you need to
create your own model. You need to train
it in parallel. And you also want
to serve that model. So that’s the second way
of using machine learning. So at Google, in terms
of using the models, we have a number of APIs. So there’s Speech API for
speech-to-text recognition. There’s a Vision API
for image recognition. And there’s more and
more that we keep adding. So this is the way that you
can consume machine learning. And we are just
basically exposing the models that we
created over the years to you, with a simple API. So let me just quickly
show you the Vision API. So in the Vision API, I have a demo
here, you can pass in an image and it tells you everything
it can about that image. So in this case, we
have a cat image, because in any machine learning
demo, you have to show a cat. That’s the rule. So what you get is that you
basically get a JSON back. It kind of looks like this. But in a graphical
way, you can basically see that the Vision
API is telling us that this is a cat, 99%. It’s a mammal, 97%. And it’s even
telling us that it’s a British Shorthair cat, 96%. And it can pick up the color. And it can tell us
whether this image is an adult image or a spoof
image, stuff like that. If we have an image with
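All of this comes back in the JSON response, and pulling the label scores out is a one-liner. The response dict below is a trimmed, made-up stand-in following the API’s labelAnnotations shape:

```python
# Trimmed stand-in for a Vision API label-detection response.
response = {
    "labelAnnotations": [
        {"description": "cat", "score": 0.99},
        {"description": "mammal", "score": 0.97},
        {"description": "british shorthair", "score": 0.96},
    ]
}

# Map each detected label to its confidence score.
labels = {a["description"]: a["score"] for a in response["labelAnnotations"]}
print(labels["cat"])  # 0.99
```

The other features shown in the demo (text, faces, safe-search) come back in the same response under their own keys.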
text, like this one, it can pick the
text from the image. So this is a traffic sign. So the Vision API is
already telling us that it’s a traffic sign. And then from here, it
can pick up the text and can tell us where in the image the text is. And I’ll show you
one more with people. So when we have people in
the picture, the Vision API doesn’t identify people, but
it detects people’s expressions. So in this case,
for example, it’s telling us that it’s a
social group with some folk dance, which is right. But then, if I go here and turn this on, you can see people’s faces, and then you can see that person, too. I guess this person is
joyful, which is nice. So that’s Vision API. And lastly, I want to talk
about building your own machine learning models, because you
cannot always consume machine learning models as it is. So to build them there is
something called TensorFlow, and Marc can tell
us more about that. MARC COHEN: Yeah. So TensorFlow is
really the technology underlying a lot of the machine
learning algorithms at Google. It’s an open source project that
was created by the Google Brain Team, was released
about two years ago, and it’s become one of the most
popular open source machine learning libraries on GitHub. It’s really powerful. You can build your
models locally, upload them to the Cloud, you
can use them to deploy models on mobile devices. And it’s also got
great support for GPUs and other types of
hardware assist. So very powerful. There’s a link here and I’m
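The loop TensorFlow automates (compute a gradient, nudge the parameters) can be shown in miniature with one parameter and plain Python. A toy sketch, not how you would actually use the library:

```python
# Fit y = w * x by gradient descent on mean squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated with w = 2
w = 0.0
learning_rate = 0.05
for _ in range(200):
    # Gradient of 0.5 * (w*x - y)^2 with respect to w is (w*x - y) * x.
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad
print(round(w, 2))  # 2.0
```

TensorFlow does the gradient computation for you via automatic differentiation and runs it across GPUs and TPUs for models with millions of parameters instead of one.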
sure we’ll get the slides to you through the organization here. But there’s a link to a
really nice article by one of our colleagues that gives a
great overview of TensorFlow. This graphic just
shows, very quickly, how much TensorFlow
has taken off. There are a lot of
tools out there. And it’s good to know a
few of these, I think. But TensorFlow seems
like a really good one to invest in because it’s gotten
so much support, so quickly. The product that we’re
using to kind of make this stuff more approachable
and more accessible is Google Cloud Machine
Learning Engine. And the idea here is that, just
like in the case with Compute where you don’t really
necessarily want to care about all the details of the
underlying infrastructure– you just want to write
your code and think about your application– the same kind of thinking
applies in machine learning. You want to think
about your model, training it, debugging
it, testing it, making sure it’s ready for production. You don’t want to
really be burdened with running machines
and parallel processing and all the kind of
lower level stuff that Google’s very
good at and has lots of people doing all the time. So this gives you that
kind of abstraction layer. You can specify kind of a
meta definition of your model using YAML. And then you can use the G Cloud
command to upload your model into the Cloud for training. You can monitor the progress
through the same tool we use to monitor our other applications, Stackdriver Logging. And then you end up
with trained models that you can operate on
almost like applications, as Mete showed
earlier in App Engine. You can test those
models independently through different
URLs, and you can change the settings
for which one you want to serve, by
default, and so on. So again, it’s really
taking you away from thinking about machines
and implementation details, to just operating on your model
as an abstraction of its own. The other really
powerful thing is you’re going to get access
to the Tensor Processing Unit, which is custom
ASIC hardware that Google has developed for
its own internal use. The initial version was
used internally only for, on the order of a
year and a half. And it’s very powerful, but it
was only for training models. And the new version, which
is becoming more available publicly, is much faster and
it’s also available for both speeding up training and serving
of machine learning models. So that’s all we have. I want to leave you
with some resources. The main site for all
information about Google Cloud is The console we’ve been using
is I’m a great lover of code labs,
interactive training tools, and tutorials. And if you go to,
you can find all the code labs we ever published. There’s something like on
the order of 400 of them. And there’s a code labs area
across the way that many of you probably have already seen. You can get the Cloud
specific code labs, with And then finally,
there’s a training page there for finding out more about
our formal training programs. One important thing I
wanted to tell you about is the free tier. If you go to, you can get $300 worth
of Cloud Credits, usable across the
period of one year. This is a great way to
get started without having to commit any money upfront. And I think we both
believe that the best way to learn about
this technology is to actually build
something with it. So I’d encourage all of
you, if you’re not already working with it, try it out. Use the free tier. Try building some
applications and see what your experience is like. And we always love to hear, good
or bad, what you’re finding. So let us know. We will be over in
the Cloud Office Hours area after this talk. That’s all we had. Thank you very much
for your attention. METE ATAMEL: Thank you.
