Microservices in the Cloud with Kubernetes and Istio (Google I/O ’18)

[MUSIC PLAYING] SANDEEP DINESH: All
right, good morning. My name is Sandeep. Today, I want to talk to
you about microservices with Kubernetes and Istio. So I’m a developer advocate
on the Google Cloud. I’ve been working with
Kubernetes for the past three years. It’s grown a lot. And Istio is the next
new hot thing coming out, so you’re all here
to learn about it. So let’s go. Because we all know
microservices are awesome– when you have a traditional
application, like a monolith, you’d have all of
your logic in one app, so a Java app, Ruby
app, node, whatever. You would have all the
code in your one app. It would do all the parts. Maybe you’re using
object orientation, so you have different classes
that do different things. But at the end of the day,
it’s like one big program that does all the stuff. And there’s a lot of advantages
to doing this monolithic type of development– it’s easier to
debug, it’s easier to deploy. You just have one thing,
you just push it out, and you’re done. And then you have a
really small team, and you have a very small
app, it’s really easy to do. But the problem
with the monolith comes when your app starts
to grow a little bigger. So now you have
all of these things and all these different pieces. But you still have one app
that’s doing all of it. So any time you want to make
a change in invoicing, that means you have to
redeploy the whole thing, even though the
ETL pipeline might have not been affected at all. And maybe the invoicing part is
really good to be done in Java, the ETL pipeline you want to
use Python, and the front end, you want to use Node.js. But if you have a monolithic
app, you can’t really do that. You’re kind of stuck
into using one language. So a team that wants to make
an update to one little part has to touch the whole
thing, and things start to get really
hairy really quickly. So with microservices,
each one of these becomes its own
individual component that can be written in its own
language, that can be updated and deployed independently
of the others. And so this gives you a lot
more agility, a lot more flexibility, a lot more
speed, and a lot more cool things like that. And so you’re like, yes,
that sounds really good, why isn’t everyone using microservices? Well, because once you start
growing even bigger, it gets even more complicated. Honestly, a monolith would
fall apart at this point too. But even all the advantages
of the microservice start to get really
hard to manage, because now you don’t
have just 20 services, you have 2,000 microservices. And how do you know which
one talks to which one and how the intricate spider
web of connections even works? You might have a
single request that comes into your
load balancer that gets sent to like 20,000
different microservices and then all the way back up. And anywhere that chain
breaks, your whole application looks like it’s broken. And so debugging this
becomes super hard. Deploying things, making
sure everything’s in lockstep with each other,
it becomes really difficult to try to manage
this huge stack of services. So are microservices terrible? Maybe, but we can use
a lot of tooling to try to make them more manageable. So let’s use tools to automate
these infrastructure components, automate the
networking components, automate the management
side of things. And so when you look at a
monolith versus a microservice, they both have a
lot of complexity. The difference is– and
unless you know something that I don’t, there are no tools out there that automatically write code for you and can maintain a 2 million line Java app, right? If there were some magic AI that could write code, we’d all be out of jobs in a few years. But now we do have tools
like Kubernetes, like Docker, like Istio, like
Google Cloud that can automate infrastructure
a lot more easily. So a lot of the issues that
we have with microservices can be automated away. So let’s take a look
at some of those tools. So how many people are
familiar with Docker? I expect a lot of
hands to go up. So for those of you who are
not familiar with Docker, Docker is basically a way
to package your application. So it doesn’t matter if you’re
writing Node code, Python, Java, whatever. It doesn’t matter if
you have ImageMagick, some random library that some
13-year-old kid in Ukraine compiled– doesn’t
matter, you put it in your Docker container,
and you’re good to go. All you have to do is care
about running the container, you don’t have to care about
running what’s inside of it. So basically, you take your
code and your dependencies, and you put it into some sort
of generic container format. And so it doesn’t
matter what’s inside. As long as you can
run a container, you can run all the containers. So now you have a
container, you’ve got to actually run it on
your cluster, because logging into a single machine, and
then running Docker run, and then doing that 1,000 times,
it seems like a waste of time to me. So instead, we can use
something like Kubernetes, which is a container
orchestration system. And what we can do with
that is we can say, hey Kubernetes, run this
container like five times somewhere in my cluster. So what it’ll do is it’ll
run your containers for you automatically. And so let’s say
you wanted to run this container, that
container, that container, and that container, and you
want two copies of each. You just tell Kubernetes
that, and it figures out how to run them. And so the really
nice thing about this is if a server crashes,
it will figure out that the server crashed and
run them somewhere else. If an application crashes,
it’ll figure that out. If you want to scale it up,
you want to go from two copies to four copies, you can
say, hey, make it four, and it’ll spin up two more. You want to go down
to one, because you want to save some money,
you can say, hey, make one. It will remove three. And so this is like
a declarative syntax that makes it really easy to
manage these containers running on your cluster. You might have thousands
of these running. Trying to do it manually
is going to be impossible. And how many people have
heard of Kubernetes? And keep your hands up
if you’ve used it before. About half, OK, cool. But that’s kind of
just a starting point. Now that you have these
containers running, you actually have to
manage the services, because you have a container A
talking to container B talking to container C talking
to container D, how do you manage
that set of points going between each other? How do you set rules
on who can talk to who, who can talk to what,
how many tries should you have, how much network
traffic should you send? All this kind of stuff
becomes really complicated, and that’s where
Istio comes into play. So Istio is a service mesh,
which at the end of the day, it means that it manages your
services and the connections that they make
between themselves. So if you look at it,
we take orchestration, and we go to management
and communication. So you want to manage
how these services interact with each
other, because that’s what really causes
microservices to be so useful, is the communication
between each other. You can reuse one
service, it can talk to three different ones,
give the same information, you can mix and
match, you can do all these really powerful things. But it becomes hard to really
understand what’s happening, and Istio makes it a lot easier
to understand what’s happening and control what’s happening. So that’s enough of me just
talking about random things. Let’s actually move to a demo. And if we can go to the screen,
thank you very much, OK, and I can’t see anything. Let me– sweet. So what we’re
going to do first– actually, can we go back
to the slides real quick? I didn’t tell you what
we’re going to do. So what we’re going
to do is walk through the end-to-end story
of taking an app and making it into
a microservice. So we’re going to take an
app, put it into a container, run that container
locally, store the container on the cloud. Then, we’re going to create
a Kubernetes cluster, run our app on that
cluster, then scale it out. And then, we’re going to do
something a little bit more interesting, we’re going
to run multiple apps that work together, a true
microservices services pattern. And then, we’re going to monitor
and manage them with Istio. Awesome. So let’s move back to the demo. So here, I am using
Google Cloud Shell. If you haven’t used
it before, it’s basically a free VM
in the cloud that lets you do your development work. I’m on a Chromebook right
now running the dev build. If it all crashes, we’re
going to do like jumping jacks until it comes back or something, I don’t know. It’s happened before. So basically, here
I have some code. It’s a little Node.js application. If you don’t know JavaScript, that’s fine– it’s a really simple app. Basically, it’s a web server that listens on slash. It will ping a downstream– in this case, time.jsontest.com– and then concatenate that response and send it back to us.
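For reference, a minimal sketch of what an app like this might look like– the real demo code is in the GitHub repo linked at the end; Express, node-fetch, and the env var names other than UPSTREAM_URI are illustrative assumptions here:

    const express = require('express');
    const fetch = require('node-fetch');

    const app = express();

    // Downstream to call; the demo overrides this per service via an env var.
    const upstreamURI = process.env.UPSTREAM_URI || 'http://time.jsontest.com';
    const serviceName = process.env.SERVICE_NAME || 'test-1';
    const version = process.env.VERSION || '1';

    app.get('/', async (req, res) => {
      // Call the downstream service, then concatenate its response with our own info.
      const upstream = await fetch(upstreamURI);
      const body = await upstream.text();
      res.send(`${serviceName} v${version} -> ${upstreamURI}\n${body}`);
    });

    app.listen(3000, () => console.log('listening on port 3000'));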
So let’s run this locally right now. Nope, don’t want node modules. OK, internet– so
if I run npm start– let’s see what happens. Cool. So it’s listening on port 3000. So I will– that’s
fine, I’ll do this. Let’s go to port 3000. So on Cloud Shell, we
can do a web preview, so it looks like we’re basically
running a local machine. Awesome. So you can see here that
we go to time.jsontest.com, and we print out our
current service name, which is test-1 version 1. And we’re seeing
the current time. So if I refresh this,
the time will change. Awesome, cool. So now, let’s take this and
put it into a Docker container. So to make a Docker container, basically what we do is make something called a Dockerfile. A Dockerfile is basically a set of instructions that creates this image. You can almost think of it like a bash script– that’s basically what it is. So here we start with node:8-alpine. That’s kind of a base image; it gives us a bunch of the Node stuff out of the box, so we don’t have to worry about installing Node.js. Then, we copy in our package.json, which has our dependencies, run npm install to install those dependencies, copy in our index.js, which is our code, expose some ports, and then run npm start.
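Put together, the Dockerfile being described looks roughly like this– a sketch; the exact file in the demo repo may differ slightly:

    FROM node:8-alpine

    WORKDIR /app

    # Copy in package.json and install dependencies first, so this layer caches.
    COPY package.json .
    RUN npm install

    # Copy in the application code.
    COPY index.js .

    # The app listens on port 3000.
    EXPOSE 3000

    CMD ["npm", "start"]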
So once you run docker build, it’ll create a Docker image. So let’s actually build
and run that container. Just for sake of time,
I’ve already built it. Run local– so here, when
I run that container, once I’ve built it– oh, no, I need to build it. So actually, let’s build it. It’s just not found. So this might take a
little bit of time. And while we’re
building this, let’s switch to going and building
our Kubernetes cluster. So here in our Kubernetes– if I
go to Google Kubernetes Engine. Google Kubernetes Engine is
Google’s managed Kubernetes offering. And so it’s probably
the easiest way to create a production-ready
Kubernetes cluster. So what you can do is
just click Create Cluster, and you get a bunch
of options on what you can do to create this cluster. So here, you can give it a name,
you can give a description, you can make it a zonal
or a regional cluster. And this is a
really cool feature where you can have
multiple masters, so it’s highly available. So even if a zone goes
down, your cluster will still be up and running. You can choose the version
of Kubernetes you want, the size of the
cluster, and then a bunch of– not negative
21, let’s not do that. You can do a lot of
other cool features too that Google Kubernetes
Engine gives you out of the box– things like automatically
upgrading your nodes, automatically repairing them
if they get broken, logging, monitoring, and a lot more
too, for example, auto scaling. You can turn auto
scaling on, which means that if you have more containers than can fit in your cluster, Google Kubernetes Engine will automatically scale it up to create that space. And even better, if you have fewer containers, it’ll actually scale it down to save you money, which is really cool. So that’s a lot of cool stuff– and then, all you’ve got to do is click Create, and it’ll create that cluster.
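If you’d rather script it than click through the console, the equivalent gcloud command looks roughly like this (cluster name, zone, and sizes are illustrative):

    gcloud container clusters create my-cluster \
        --zone us-central1-a \
        --num-nodes 4 \
        --enable-autoscaling --min-nodes 2 --max-nodes 8 \
        --enable-autorepair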
So I’m going to click Cancel, because I already did one. It’s like a cooking show– we
have one ready in the oven. This is still going. That’s OK. What we’ll do instead– because I’ve already built
this, and I’ve pushed it. So we’re going to do the
opposite of what I said, and we’re going to pull it down. So I kind of cheated,
but that’s OK. So let’s run that
container locally. So what we’re going
to do with Docker is, we’re going to say Docker run. And then, we’re going to
open up that port 3000. And so if we go
back here, you can see that it’s basically the
exact same thing, except now it’s running in Docker. And so the really
nice thing is, we didn’t have to change any of our code. We just put it into a
Docker container, and we’re good to go. OK, cool. So the next step is
actually pushing it to a Container Registry. And the reason why
we have to do this is because we can’t
just run containers running on our local machine. We’ve got to push them
to a secure location so that we can run
them on our cluster. And Google Container
Registry is probably the best place to put
them if you’re running Google Kubernetes Engine. You’ve got automatic
authentication and stuff and a few other features
we’ll look at in a second. So let’s actually push it up. That’s not how you spell make– M-A-K-E. I love using make, it makes demos super easy. So what we do is run docker push. Can people read that, or should I make it a little bigger? All right, people can read it. So we run docker push, and then give it the name of the container image.
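The make targets in the demo just wrap the usual Docker commands; a sketch of the equivalent steps, with an illustrative project ID and image name:

    # Build the image and tag it for Google Container Registry.
    docker build -t gcr.io/my-project/istio-test:1.0 .

    # Run it locally, mapping the app's port 3000.
    docker run -p 3000:3000 gcr.io/my-project/istio-test:1.0

    # Push the image up to Google Container Registry.
    docker push gcr.io/my-project/istio-test:1.0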
So what will happen is, it’ll actually go to Google Container Registry. And so here in my
Istio test container, you can see that we have the
newest one pushed up right now. And it’s tagged as version 1.0. Now, if I enabled it, one really
cool thing that it could do is vulnerability scanning. So we can actually
scan your containers for known vulnerabilities
automatically for any Ubuntu- or Alpine-based images. So that’s a really
cool thing, especially if you have older containers
that you haven’t updated in a while, you
go back and look, and you’ll have tons
of vulnerabilities. So it’s a really good
idea to check this out. It’s in beta right now, but
it’ll find a bunch of stuff wrong with your things and
tell you how to fix them. And then, you just go– usually, you just
update your containers, and you’re good to go. So it’s a really good
thing, especially if you’re running in production and
running older containers. So now we have this pushed up. We actually can start to
deploy to a Kubernetes cluster. So like I said before, we
already have a cluster created. Let’s go back there. Now, basically to connect to
it, we just run this command. All right, let’s run it. Great, so now, we can say
kubectl, which is a Kubernetes command line tool,
get nodes just to make sure everything is working. Is everything working? That’s a good question. Yes, there it is. So we have a four node
Kubernetes cluster. They’re all good to go,
running version 1.9.7. So here now that we have
our Kubernetes cluster and we’re
authenticated to it, we can start running the same
container that we ran locally. So what we’re going to do is,
do kubectl run demo, and then give it the name of that image. And demo is going to be
the name of the deployment that we’re going to create. So if I say kubectl
get deployment, and then give that namespace. The namespace is just
for this demo’s sake. Don’t worry about it, you
don’t need it real life. So you can see that we have
our demo deployment created. And if you say get pods– and so pods are basically the
containers in Kubernetes– you can see that we have our
container created, it’s ready, and it’s running. But now that it’s
running, you have to be able to
actually access it. And so this is where it gets
a little bit more complicated. Where in the old
world, with Docker, you just kind of
access it directly, because it’s just running
on your local machine. But in the Kubernetes
world, it’s running on one of
those four nodes, and we really don’t
care which one. So we want a static
endpoint that can route traffic
and access that node. So what we’re going to
do is create a service. And now I forget what
my make command is. That’s OK, I’ll just go look,
it’s like a cheat sheet. Let’s see– expose, there it is. Cool, and so what we’ll do here
is, we’ll run kubectl expose, and then we’ll give it the target port of 3000, which is where our Node.js app is listening, and run it on port 80, so we access it on a normal HTTP port. And we’ll make it type LoadBalancer, which will give us a public IP address using a Google Cloud load balancer.
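Spelled out, the kubectl commands for this part of the demo look roughly like this (the deployment name is from the demo; the image and namespace names are illustrative):

    # Run the container on the cluster as a deployment.
    kubectl run demo --image=gcr.io/my-project/istio-test:1.0 -n demo

    # Check the deployment and its pod.
    kubectl get deployments -n demo
    kubectl get pods -n demo

    # Expose it on port 80 behind a Google Cloud load balancer.
    kubectl expose deployment demo --port=80 --target-port=3000 --type=LoadBalancer -n demo

    # Watch for the external IP to be assigned.
    kubectl get service demo -n demo --watch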
So now, we can say get service– and I assume most people here are familiar with Kubernetes. If you’re not familiar,
come and just talk to me. I know I’m going
kind of quickly, but there’s more exciting
things to happen soon. So I want to get to them. So you can see the external
IP address is pending, so let’s just run
a watch on that. And it’s done the moment I
try anything complicated, it just finishes, it knows. All right, cool,
let’s go to that URL. That, not copy it– all right. All right, we’ll just do this. Cool. And if you go here, you
can see the exact same app running on our cluster. So now, we actually have it
running on a public IP address. If you go there on your phone
or laptop, it will work. And what we can do– let’s say all of you go on
your phone to this IP address, it will probably
overwhelm this app. So in Kubernetes, we can
actually just scale it out with the scale command. So let’s do that. So let’s say kubectl scale deployment demo– the name of the deployment– then I’m going to say replicas equals 5, and again, that namespace. Awesome. So it will say deployment demo scaled.
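As a single command (again with the illustrative namespace):

    kubectl scale deployment demo --replicas=5 -n demo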
So now, if we go back to our get pods, you’ll notice that
we actually have five of them running in our
cluster, which is really cool. And then, if you
say get deployments, you can see it also says that
we have 5 desired, 5 current, 5 up to date, and 5 available. So now, if we go back to this
IP address, it still works. The only difference is
it’s now round robining traffic between all five
instances of our application. So you have a lot more
scalability, a lot more robustness. And you might notice that
we’re running five copies even though we only have four VMs. That’s because we’re able to run
multiple copies on a single VM, so we get more utilization
out of our machines. Let’s see– clean. So let’s clean that up, we
no longer need that demo. Let’s start looking at Istio. So now we had a single service
running, which is kind of useless in the grand scheme of things. No one just runs
one thing, you don’t need Kubernetes to do that. What you really want to do
is run multiple applications together and make sure they
all work together well. What we’re going to do is
run this same application three times and then
chain it together. So you might have
noticed in our code, I’m going to go back
to it for a second, that basically I
can set whatever I want as the upstream URI
using an environment variable. And so what I can do is
actually tie multiple of these together using
one to talk to the other to talk to the other, and then
finally to time.jsontest.com. And then, they will all
concatenate their responses, and finally show
it to the end user. So that’s a really cool thing. In Kubernetes, we can
actually dynamically set these environment variables
when we’re deploying. So let’s actually look
at that. So normally in Kubernetes, what we would do is create a YAML file. And I’ll make this
a little bigger. And in this YAML file, we
will define our deployments. So I’m going to make
four deployments. So the first one is going to
be called front end prod, so front end production service. As you can see here, I’m giving
it the service name front end prod. And the upstream URI, I’m
going to call it middleware. And so Kubernetes
will automatically do DNS-based service discovery. So I can actually find
my middleware application by just pointing
it at middleware, which is really cool. I don’t need IP addresses
or anything like that. And you can see
here, we’re giving it some labels, app front
end, version prod, and then the same container name
that we used before. And then, in our
middleware, everything looks exactly the same, except
it’s app middleware version prod. And then the service
name is different, and the upstream
URI is now back end. Then, we also have a Canary
version of our middleware. So it’s like our test
version that we’re just kind of working on,
but we want to see how it works in the real world. So again, this also points to the back end. And then finally, our back end, which points to time.jsontest.com.
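One of those deployments might look roughly like this– the labels and the UPSTREAM_URI environment variable follow what’s described in the talk, while the other field values are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend-prod
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: frontend
          version: prod
      template:
        metadata:
          labels:
            app: frontend
            version: prod
        spec:
          containers:
          - name: demo
            image: gcr.io/my-project/istio-test:1.0
            ports:
            - containerPort: 3000
            env:
            - name: SERVICE_NAME
              value: frontend-prod
            # Kubernetes DNS-based service discovery: "middleware" resolves to
            # the middleware Service in the same namespace.
            - name: UPSTREAM_URI
              value: http://middleware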
So at this point, we can deploy it into our cluster. And so now, we’ll create
these deployments. But of course, we’ll
also need to create the corresponding services
so they can actually find each other. So let’s take a look at that. So here in our services.yaml, it’s pretty straightforward. We have a front end, a middleware, and a back end service. They all map port 80 to the app’s port 3000, which makes sense. The only difference here is the front end service is of type LoadBalancer, so that we get that public IP address.
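A sketch of one of those services– here the middleware one; the front end’s would be the same with type: LoadBalancer added:

    apiVersion: v1
    kind: Service
    metadata:
      name: middleware
    spec:
      # Matches any pod labeled app: middleware, prod and canary alike.
      selector:
        app: middleware
      ports:
      - port: 80          # the port other services call
        targetPort: 3000  # the port the Node.js app listens on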
So let’s go to that public IP address and see what happens. Great, OK, still
pending, that’s fine. Let’s do it one more time. There we go. OK so interesting,
it was a little bit different than before. You can see that our front end
prod went to our middleware, then the middleware
went to our back end, and our back end went
to time.jsontest.com. But we’re getting a
404, which is really weird, because I mean,
if I go to this website, clearly it works. We were seeing it working
all this time before, right? It’s working, so why
are we getting a 404? And so now, we go into
the world of Istio. So I actually have Istio
already running in this cluster. And what it’s just
going to do is, it’s going to really
lock down and help you manage your microservices. So it knows that
time.jsontest.com is an external service. And by default, it’s going to
block all traffic going out of your cluster. And this is really good
for security reasons, you don’t want your app just
talking to random endpoints on the internet. You want you as a
cluster administrator to be able to lock down and
only talk to trusted endpoints. So by default,
everything is blocked that’s going out
of your cluster. So how do you unblock it? Well, in Istio, we have something called egress rules. It’s a pretty simple thing– you can see it’s only a few lines of YAML. And so here, we’re going to say allow traffic to time.jsontest.com on both port 80 for HTTP and 443 for HTTPS. So let’s deploy that rule.
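A sketch of that rule in the EgressRule format that Istio releases of this era used (current Istio expresses the same idea with a ServiceEntry instead):

    apiVersion: config.istio.io/v1alpha2
    kind: EgressRule
    metadata:
      name: jsontest-egress
    spec:
      destination:
        service: time.jsontest.com
      ports:
      - port: 80
        protocol: http
      - port: 443
        protocol: https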
And for Istio, we use the istioctl command line tool. It’s just like kubectl– kube-control, istio-control, I don’t know how to pronounce them; your guess is as good as mine. But it’s very similar to the kubectl tool. And so now, we’ve created that. And we can go back to our
web site, and hit refresh, and you can see it’s all
working perfectly fine. So we’ve kind of whitelisted
traffic to time.jsontest.com. Now, you might notice
another thing going on. If I hit refresh, just look
at this line right here. Sometimes it says canary
and sometimes it says prod. And so the reason why
this is happening is, by default, Kubernetes
will use round robin load balancing for its services. So our middleware
service is pointing to any deployment
or any pod that has the tag app middleware. But both the Canary
and the prod versions both have that tag, so
Kubernetes will blindly send traffic to both. In fact, if you had
three versions of Canary and only two versions
of your prod, a disproportionate amount of
traffic would go to canary. In fact, 3 to 2. Because it’s just round robin. So with Istio, we can
actually make traffic go exactly where we want. Let’s do that real fast. So what we can do is set something called a route rule. And basically, we can say whenever the destination is middleware, always send it to prod. And then, whenever the destination is front end, send it to prod, and back end to prod– super simple things.
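The middleware rule, sketched in that era’s v1alpha2 RouteRule format (current Istio does this with a VirtualService and DestinationRule); the rule name is illustrative:

    apiVersion: config.istio.io/v1alpha2
    kind: RouteRule
    metadata:
      name: middleware-default
    spec:
      destination:
        name: middleware
      precedence: 1
      route:
      # Only pods labeled version: prod receive traffic.
      - labels:
          version: prod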
But what happens is, now when I hit refresh, it will always go to
our production service. So as an end user, I never
have to worry about it. I’ll always hit
production, I won’t hit some random test
build that’s running in my cluster, which
is really nice. But this is great. This app is like bulletproof,
so simple it never breaks. But in the real world,
our code is not perfect. I don’t write perfect code,
you don’t write perfect code, let’s be honest with each other. In the real world, things
break all the time. So to simulate that, I have
this really cool function called create issues. And what create issues will do is, it’ll look at a header called fail. And then, it will randomly generate a number, and if that number is less than the fail value, it’ll just return a 500.
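The idea is roughly this– a sketch, not the demo’s actual code, using the header name described in the talk:

    // Fail on purpose: if the "fail" header is, say, 0.3, roughly 30% of
    // requests to this service get a 500 response.
    function createIssues(req, res) {
      const failRate = parseFloat(req.headers['fail'] || '0');
      if (Math.random() < failRate) {
        res.status(500).send('something went wrong!');
        return true; // handled (as a failure)
      }
      return false;
    }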
Yeah, I didn’t know how else to make things break on purpose. It’s hard to make
things break on purpose. So if you’ve ever–
this is going to be a program called Postman. If you’ve never used it
before, it’s awesome. Basically, it lets you set
headers and JSON and things like that automatically. So let’s just do a
normal request right now. You can see it works just
the same way as doing a request from the web browser. But now, we can actually
send headers from this tool. And let’s set a value of 0.3. So it’s a 30% chance of failure. Let’s see what happens. Boom, back end failed,
boom, middleware failed. Everything worked,
back end failed again, everything worked again,
middleware failed, there, everything failed. And we might notice that
the problem is actually worse than we think, because
in our code what we’re actually doing is, we’re
propagating these headers. These are for tracing,
we’ll take a look at that in a second. But we’re actually
forwarding this fail header along each request. So it’s not just a 30% chance– each of the three services gets its own 30% chance to fail, so the request only makes it through about 70% times 70% times 70%, roughly a third, of the time. And that’s where all
these cascading failures come into play
with microservices, because one failure can trigger
tons of failures downstream. So let’s take a look how
Istio can help us with this. So let’s do our
second routing rule. Let’s do something called a simple retry policy. So let’s have Istio automatically retry the request up to three times before giving up.
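Sketched in the same v1alpha2 RouteRule format, a simple retry policy looks roughly like this (the attempt count is from the talk; the per-try timeout is illustrative):

    apiVersion: config.istio.io/v1alpha2
    kind: RouteRule
    metadata:
      name: middleware-retries
    spec:
      destination:
        name: middleware
      route:
      - labels:
          version: prod
      httpReqRetries:
        simpleRetry:
          attempts: 3
          perTryTimeout: 2s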
And again, we don’t have to change our code at all. Istio is transparently proxying
all of these network calls– proxying is a strong
word– transparently managing all these
network calls. So what happens is,
our code doesn’t have to know at all that Istio
is trying it three times. It just tries it once. And Istio actually
manages all the back off, and the retries, and
all that kind of stuff for you automatically. So you don’t have to. So now, if we go
back to Postman, let’s see what happens. Boom, working. Still some issues,
working, working, working, working, much better. Obviously, it didn’t
fix the issue. In fact, if I increase this
to something like 0.5, you see it’s going to
fail a lot more, or not. It might not fail,
it might mask that. You know what? 0.9, all right, see, failure. And you might notice it’s
failing a lot at the front, too. So let’s take a
look at two things. One, it’s able to mask– M-A-S-K– your failures. But you don’t want
it to always do that. Just because things are working
doesn’t mean things are good. You want to be able to
detect that you have errors, and be able to detect
that and fix it. So Istio actually
gives you a lot of tooling out of the box
to manage your systems. The first thing I
want to show you is something called
service graph. So because Istio is sitting and
intercepting all those network calls, it’s actually able
to create a full picture of your services for you. So you can see here, we have a
front end talking to our prod and canary, talking
to our back end, just talking to
time.jsontest.com– we can also start doing
things, automatically start getting metrics
from our cluster as well. Wow, that looks
way too zoomed in. All right, let’s zoom
out a little bit. You can see here, once we
started adding those errors, our global success rate just
started crashing to the bottom. And so Istio will actually
automatically find your 500s, your 400s, your
volume, your QPS, your latency, all this stuff
automatically for you, and start throwing it onto
Prometheus and other dashboards so you can start putting
them into your systems, and start running metrics,
and understanding what is going on in your services. And I didn’t have to
write any code for this. Istio gives me all these
metrics out of the box for free. And then finally, we can
use something like Zipkin or Jaeger to do tracing– Zipkin, Jaeger, Stackdriver Trace, a lot of these tools all work. So let’s find some traces. So you can see here, we
can see our front end talks to our middleware
talks to our back end. And we can see that obviously
the back end takes the longest amount of time,
because it’s talking to our external service. But even more interesting
than this, you can see here, it actually has back end
2x, 2x, and middleware 2x. And I can’t find one, that’s OK. Let’s see look back one hour. That’s OK. So you can see all the
distributed tracing. And to do that, all I had to do
was forward those trace headers along. Let’s look at that for one second, those trace headers. So as long as I’m forwarding those trace headers, I get distributed tracing for free out of the box.
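Concretely, the headers Istio documents for this just need to be copied from each incoming request onto the outgoing call; a sketch (the helper name is illustrative):

    // Headers Istio/Envoy use to stitch requests into one distributed trace.
    const TRACE_HEADERS = [
      'x-request-id',
      'x-b3-traceid',
      'x-b3-spanid',
      'x-b3-parentspanid',
      'x-b3-sampled',
      'x-b3-flags',
      'x-ot-span-context',
    ];

    // Copy any trace headers from the incoming request onto the upstream call.
    function forwardTraceHeaders(req) {
      const headers = {};
      for (const name of TRACE_HEADERS) {
        if (req.headers[name]) {
          headers[name] = req.headers[name];
        }
      }
      return headers;
    }

    // e.g. fetch(upstreamURI, { headers: forwardTraceHeaders(req) })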
And actually, I have another talk where I talk about OpenCensus, which can automatically
which can automatically forward these for you as well– so even less work for
you as a developer. Now, let’s do one final thing. So here, you can see
when I hit refresh, I’m always going
to my prod service. But as a tester, I kind
of want to actually use a Canary service and see what
happens in this new path. And the really nice
thing is that we can do in-production
testing using Istio. Because when we have
thousands of services, it’s impossible to launch
them all on our local machine and test them that way. We want to kind of
push our service. You think it’s
working, so we push it into a production cluster, but
we don’t want any production traffic hitting it. We just want our test
traffic hitting it. So what we’re going to do is put up the last and final route rule. And it’s going to make a new route called the middleware canary route. And what it’s going to look for is a header called x-dev-user. And then, whenever it sees a value of super secret, it’s going to route it to our canary. So all normal traffic will go to our production service. And then, the super secret traffic will go to our canary service.
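A sketch of that last rule, again in the v1alpha2 format– the header name is from the talk, the exact value and precedence are illustrative:

    apiVersion: config.istio.io/v1alpha2
    kind: RouteRule
    metadata:
      name: middleware-canary-route
    spec:
      destination:
        name: middleware
      # Higher precedence than the default rule, so this match wins when it applies.
      precedence: 2
      match:
        request:
          headers:
            x-dev-user:
              exact: super-secret
      route:
      - labels:
          version: canary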
Obviously, you’d use something like OAuth or some sort of
authentication scheme, not just the word super secret,
because security is important. Let’s take a look at
that, so make Canary. So let’s deploy this rule. And so now, if we go
here, obviously it will still go to our
production service. But if we go to Postman– and let’s remove
that fail header. And let’s go to x-dev-user. Boom, nope, did
I spell it wrong? I don’t know, let’s
see what happened. Oh, this is disabled,
ah-ha, there we go. Thank you. There we go, middleware Canary. So we’re able to route to a
specific service in the middle of our stack using headers. And that’s because we’re
using header propagation. So even if you had
like 2,000 services, and you want to make the
middle one, the 900th service, a special thing. By propagating these
headers, we can actually use Istio and then
route to that one, even if it’s in the
middle of our stack. So not just the front
end, we can test anything in the whole stack using
Istio and header propagation. All right, let’s switch
back to the slides, please. So we do all this stuff– thank you all so much. If you want to get a
deeper dive into Istio, it’s basically this talk with a little bit more focus specifically on Istio. You can check out my site, slash istio-101– it will have a YouTube video. All this code is open
source on my GitHub. Again, if you go to
that website, Istio-101, I have a link to my
GitHub repository there. You can check out Google
Kubernetes Engine at g.co/gke, istio.io, follow me on Twitter. If you have any questions,
please follow me out the door. I’m going to go staff the
office hours right now, so come talk to me there,
or talk to me right outside, or find me in the sandbox. I’m happy to answer
any questions. I know we went kind
of quickly today. But yeah, thank you all so much. Enjoy the rest of your I/O,
enjoy the concert tonight. And I’ll see you out there.

Comments

    Sourabh Roy

    This guy is stuck in his own stuff. If you are doing a presentation, at least acknowledge the competition and understand what they are doing– try Azure. Although Google created Kubernetes, you guys are not doing the best job with it. Good that it lives as open source– at least people who know software development can do better. Also, the world has .NET as well; at least acknowledge it. And I don’t know why they try to be so smart. Amazing.

    Muhammad Fahreza

    Super Cool.

    I wonder how to achieve process.env.UPSTREAM_URI if you are not using a Node.js service?

    As for me, using a Python-based service, is it possible to achieve this feature?

    Dino Lai

    For those who already use Docker and want to know how to use Kubernetes and Istio:
    11:13 – 12:42 GCP Kubernetes Cluster
    15:42 kubectl
    21:05 Istio

    Rob Christian

    Should change the portion of the video title from "with Kubernetes" to "with Google Kubernetes Engine"

    silakanveli

    The whole idea is lost with the Google stuff. The video should be how to make the dev process as complicated as possible… nothing cool.

    salim dawod

    Thank you for everything, but I wanted to ask you: I want to do a project where Kubernetes manages my Docker containers. What should I do? Please, I need a practical view.

    guibirow

    Very boring presentation intro… 8 minutes of talk, 15 of a demo not related to the main subject.
    If you want to save some of your time, go to 21:00 and see Istio in action.

    Manikanta Reddy P

    Hello, how can I secure the Grafana and tracing URLs with credentials? Is there a way to do that from Istio, or do I need to use a proxy server?

    bhagya shree

    I came across Kubeless, which is not released by Google but is a Bitnami project. We are trying to implement it in our project– can you please look into this topic?

    sfincione2000

    Really good presentation – nice quick overview of what you can do with Istio. The example made it super easy to imagine other scenarios. Thumbs up! =)

    Mohit Gupta

    I wonder if you really want to add that much complexity to your application. As an architect, I have always followed the rule to keep things simple, and this just goes against it. I remember when we were taught the role of linking and loading in our operating systems class: it was internal to the OS– the OS managed it, not the end users. Whereas here you are required to configure scaling, rollouts, and many other things.

    I wonder whether Istio is ready for mass use. Perhaps a more user-friendly abstraction layer is required.

    Jibin

    What if a service hosted in the Kubernetes cluster needs to talk to a Hadoop cluster or SQL Server– will Istio block it? If yes, how do we set the egress rule?

