Tracking the Internet into the 21st Century with Vint Cerf


RANDY KNAFLIC: –close. We’ll get things kicked off. So first of all, welcome. And to all the Americans
in the house, having Thanksgiving. A few. All right. All right. Good. So I’m Randy Knaflic. I head up Google’s recruiting
machine here in EMEA, and I’m based here in Zurich. And I just wanted to– I get the pleasure and the honor
of introducing our guest this evening. And since we have all of you
here, I thought it would be good chance to just spend just
a couple minutes because some of you may or may not be
familiar with this search company called Google. So I’ll do my best to give you
a little education on that. But probably it’s better just to tell you a little bit about Google here in Zurich,
Switzerland. So we’ve actually been here
for nearly four years now. And started with actually
one engineer. And we have the honor of having, actually, that one engineer in the front row. So if you’re interested in
seeing how you go from one engineer to a site that’s over
350 in a very short time, he can tell you how it’s
expanded quickly. And I can tell you– I had a little bit more hair
when this all started. So Google’s idea was to
basically take the model that worked really well in Mountain
View, the engineering model that worked there, and basically
recognizing that there’s this one little fact. That is, there are a lot of
engineers in the world. And guess what? They don’t all sit in Mountain
View, California. So, believe it or not, they
decided they wanted to bring this model a little bit
closer to the masses. And Zurich was the first site
where they decided to do that. And since then, we’ve actually
grown to having 12 engineering sites throughout EMEA and a
number of different locations in all the major cities that
have engineering talent. And it’s worked really well. And so having the engineering
sites in these locations is enabling us to do things both
for the global market and for the local market. So most of you are
here in Zurich. And whether you knew this or
not, next time you’re on Google, check out Google Maps
and all those really cool things that you can do like
see what time the bus– the bus outside the Hurlimann-Areal that comes every– I think it’s, what, four days? It’s a really bad schedule
on that. But you would see
that schedule. And that was done by
our engineers here. Not the bus schedule itself, but letting you know when it will come. We’re working on that as well. We’ll get back to you on that. So we’re continuing to grow. And we’ll continue to tap into
the talent that is in not only Switzerland, but beyond. This site is very special, and
it’s probably no secret that Zurich is a great
place to live. And we’re able to attract a lot of talent from all over. In fact, we have over 42
different countries represented just in
this one location. Actually, I should say two
locations, because this is now our second site that all of you
will have an opportunity– those who opted for a tour will
get to see our new site here in the Hurlimann-Areal. So that leads us to the next
step, which is actually we have a returning visitor and our
guest and honorable guest for this evening. And it’s fun, because every time
Vint comes back, he gets to see how– he comes. There’s 30, 40 engineers here. He runs away. Comes back, there’s
300 and some here. So we’re going to
keep doing that. We’re hoping now the next time
he comes back, we even have this entire building filled and
we’ll continue to grow. So he’s a man who needs very
little introduction. He’s actually the real father of
the internet, despite what some Americans think
about Mr. Gore. The work that he did in creating
the TCP/IP protocols is obviously what has enabled
companies like Google to exist. So when he joined in
September of 2005, this was a huge honor for us and a great
win to have such a visionary on board. And we’re very pleased
to have him. And it’s with my great
pleasure that I introduce Vint Cerf. VINT CERF: Thank you
very, very much. And I really appreciate
everyone’s taking time out, especially in the middle
of the week, in the evening, to join us. It’s a real pleasure to have you
here on our Google campus. I don’t know where Renee
LaPlante is right now, but it’s her birthday today. Renee, where are you? Happy birthday. As for the crack about Al
Gore, there’s always some nincompoop who brings that up. Al Gore deserves credit for what
he did as a senator and as vice-president. He actually helped to pass
legislation that enabled NSFNET backbone to grow and to
permit commercial traffic to flow on the government-sponsored backbones in the US. Had he not done that, it’s
pretty likely that the commercial sector would not have
seen an opportunity to create a commercial internet
that all of us can enjoy. So he does deserve some credit
for what he’s done. I meant to start out– [SPEAKING GERMAN] And that’s all the German you’re
going to get tonight. I did spend a very pleasant six
months in the area around Stuttgart as a young student
at Stanford University. Well, my purpose tonight is to
give you a sense for what’s happening to the internet today
and where we think it’s headed in the future. And I thought I would also take
advantage of the time to give you little glimpses of what
it was like in the early stages of the internet. But first, let me explain
something about my title. I’m Google’s Chief Internet
Evangelist. It wasn’t a title that I asked for. When I joined the company,
they said, what title do you want? And I suggested Archduke. And they said, well, that
doesn’t quite fit in any of our nomenclature. And they pointed out that the
previous Archduke was Ferdinand, and he was
assassinated in 1914, and it started World War I. So maybe
that’s not a title that you want to have. And they suggested
that, considering what I’ve been doing for the
last 35 years, that I ought to become the Internet Evangelist
for Google. So I showed up on my first day of
work wearing this outfit that I guess you see over
on your right. This is the formal academic
robes of the University of the Balearic Islands. And it was the most
ecclesiastical outfit that I owned. So I showed up wearing
that on my first day of work at Google. And Eric Schmidt took
this picture. It’s not often you can find an
opportunity to wear something looking like that. So I took advantage of that
for that one day. Well, let me just start out
by reminding you of some statistics of the internet
over the last 10 years. 10 years ago, I would have been
very excited to tell you there were 22 million machines
on the internet. Now, there are almost
500 million. And these are servers, the web
servers, the email servers, and the like. It’s not the machines that are
episodically connected, like laptops or personal digital
assistants. The number of users on the net
has grown to 1.2 billion, which sounds like a big number
until you realize that there are 6 and 1/2 billion
people in the world. So the chief internet evangelist
has 5.3 billion people to convert
to internet use. So I have a long ways to go. The other thing which has been
happening in the telecom environment over the last decade
has been the rapid influx of mobiles. The estimates now are that there will be 3 billion mobiles in use by the end of this year, across 2.3 billion accounts, which means there are 700 million people with more than one mobile. What’s important to us and
others in the internet environment is that many people
will have their first introduction to the internet
through a mobile and not through a laptop or a desktop. There are estimated to be about
10% of all the mobiles in use that are internet
enabled. And so as time goes on and more
and more of these devices become part of the landscape, an
increasing number of people in the world will have their
first introduction to the internet by way of a mobile as
opposed to other instruments. If we look at the distribution
of users on the network, the first thing that strikes me,
anyway, is that 10 years ago, North America would have been
the largest absolute population of internet users. But today, it’s Asia, which
includes China, and India, and Indonesia, and Malaysia,
Japan, and so on. But interestingly enough, this
large number, 460 million people, represent only 12% of
the population in that region. So as they reach the same
penetrations as we have, for example, in Europe, at 42%, the
absolute numbers of Asian users will increase. This tells you something about
what to expect in terms of the content of the internet. The kinds of interests that
people will have, the culture and style of use of the net
will all be strongly influenced by our colleagues
living in that region. Europe has almost 338 million
users, with a penetration of about 42%. I’ve given up trying to make any
predictions at all about Europe because you keep
re-defining yourselves by adding countries. So whatever happens is
going to happen. Africa is at the bottom of the
list here in terms of the percentage penetration. There are a billion people
living in Africa, but very few of them, 40 million of them,
have access to the internet. It’s a big challenge there. The telecom infrastructures
are still fairly immature. The economies vary pretty dramatically. And so getting them up and
running on the internet is an important task, and one which
is a significant challenge. I thought you’d find it amusing
to go way back into history and see the beginnings
of the predecessor to the internet, the ARPANET. There was a four node system
that was set up in December of 1969. And I was fortunate enough to be
a graduate student at UCLA in September of 1969. I programmed the software that
connected a Sigma 7 computer up to the first node of the ARPANET. The Sigma 7 is
now in a museum. And some people think I
should be there too. But this was the beginning of
wide area packet switching. And it was a grand experiment
to see whether or not that technology would actually
support rapid fire interactions among time
shared machines. And indeed, it worked out. The packet switch was
called an IMP, an Interface Message Processor. This is what it looked like. It was delivered by Bolt,
Beranek, and Newman, the company in Cambridge,
Massachusetts, in a very, very heavy-duty metal box. They knew that this was
a military contract. And they didn’t know whether
they would be dropping these things out of helicopters
or what. So they put it in a very, very
heavy duty container. Considering it was installed at
UCLA surrounded by graduate and undergraduate students,
probably this heavy-duty container was exactly the right
thing, even if it never deployed into any other place. This picture was actually
taken in 1994. It was the 25th anniversary
of the ARPANET. The guy on the far left is Jon
Postel, who all by himself managed the RFC series as the
editor, managed the allocation of IP address space, and managed
the delegation of top level domains in the domain name
system for over 25 years. You can imagine, though, by the
time 1996 rolled around, the beginnings of the dot boom had happened, Netscape Communications had done its IPO, the general public had discovered the internet, and we were off and running. Jon realized that the function that he
performed needed to be institutionalized. And so he began a process of
trying to figure out how to create an organization that
would perform these functions. In the end, an organization
called ICANN, the Internet Corporation for Assigned Names
and Numbers, was created in 1998 to take over the
responsibilities of domain name management, internet
address allocation, and the maintenance of all the protocol
tables associated with the internet protocols. Sadly, Jon Postel passed away
two weeks before ICANN was actually formally created. But it has now persisted
since 1998. I had the honor of serving as
chairman of the board of ICANN until just about
two weeks ago. And I managed to escape because
my sentence was up. There are term limits
in the bylaws. And it said I couldn’t serve
anymore on the board. And, frankly, I was pleased
to turn this over to my successor, a man named Peter
Dengate Thrush, who is from New Zealand. So the point I want to make
here, apart from the amusing diagram of 1994, is that the
internet now has, and has always had, a certain international character to it. The guy in the middle
is Steve Crocker. He was the man who really blazed
trails in computer network protocols. He ran what was called the
Network Working Group, which was a collection of graduate students in various computer science departments
around the United States, developing the first
host to host protocols. It was called NCP, the Network Control Program. And Steve was the primary
leader that developed that protocol. It was used until 1983, when
the TCP/IP protocols were introduced to a multi-network
system. You’ll notice that we tried to
demonstrate in this picture for Newsweek how primitive
computer communications was in the 1960s. It took us almost eight hours
to set up this shot. And we drew all these pictures
on foolscap paper of the clouds of networks. And then we had to buy some
zucchinis and yellow squash and string them together. You’ll notice that this network
never would work because it was either ear to
ear or mouth to mouth. But there was no mouth to ear. We posed it this way on purpose,
hoping there would be a few geek readers of Newsweek
who would get the joke. This is what I looked like when
the TCP/IP protocols were being developed. I demonstrated– not TCP/IP but the ARPANET from
South Africa in 1974. This was actually an interesting
experience because we brought an acoustic coupler
with us into South Africa. It was the first time that the
South African telecom company had ever allowed a foreign
object to be connected to their telephone system. And they were very concerned
that this acoustic coupler might do damage to
their network. So we managed to persuade them
that it would be OK. And we connected the terminals
in South Africa to the ARPANET by way of a satellite link all
the way back to New York at the blazing speed of 300
bits per second. Well, the internet got started
in part because Bob Kahn, the other half of the basic design
of the internet, told me that in the Defense Department, he
was looking at how to do computers in command
and control. And if you were really serious
about putting computers where the military needed them to be, you
had to have computers running in mechanized infantry vehicles,
and in tanks, and other things. And you couldn’t pull wires
behind them, because the tanks would run over the wires
and break them. So you needed radio for that. And you also needed to have
ships at sea communicating with each other. And since they couldn’t pull
cables behind them, because they’d get tangled in knots,
instead you needed to have satellite communication for
wide area linkages. And, of course, we needed
wireline communications for fixed installations. So there were three networks
that were part of the internet development. One was called the packet
radio net for mobile ground radio. Another was packet satellite
using Intelsat 4A across the Atlantic to permit multiple
ground stations to compete for access to a shared
communications channel. And then the ARPANET, the
predecessor, which was based on wirelines. SRI International ran the packet
radio test bed in the San Francisco Bay area
during the 1970s. And they built this nondescript
panel van as a way of testing packet radio by
driving up and down the Bayshore Freeway and
occasionally stopping to make detailed measurements of packet
loss, signal to noise ratio, the effects of shot noise
from cars running back and forth nearby the van. The story goes that one day
they’d pulled off to the side of the road, and the driver, who
was another engineer, got out from the cab, and went
around, and got into the back of the van. They were making a bunch
of measurements. And some police car pulled up
and noticed that there was nobody in the cab. So he went around and
knocked on the door. And, of course, they
opened up the door. And this policeman looks in. And he sees a bunch of hairy,
geeky people with computers, and displays, and radios,
and everything else. And he says, who are you? And somebody says, oh, we
work for the government. And he looks at him, and he
says, which government? But officer, we were only going
50 kilobits per second. Well, I remember in 1979, after
we demonstrated that this technology actually worked,
that we wanted to convince the US Army that they
ought to seriously try these out in field exercises. So I had a bunch of guys from
Fort Bragg 18th Airborne Corps coming out to drive around on
the Bayshore to actually see how it worked. And later, we actually deployed
packet radios at Fort Bragg for field testing. They were big one cubic
foot devices that cost $50,000 each. They ran at 100 kilobits and
400 kilobits a second. They used spread-spectrum
communications. Now, this is pretty advanced
considering it’s 1975. And, of course, the physical
size of the radio tells you something about the nature of
the electronics that were available at the time
to implement this. This is a view of the
inside of that van. Something else that was very
interesting in all of this is that some of you will be
familiar with voice over IP. Maybe many of you are using
Google Talk, or Skype, or one of the other applications,
iChat. We were testing packetized
speech in the mid-1970s. So, in fact, this is not such
a new thing, after all. In the case of the packet radio
network and the ARPANET, we were trying to put
packetized speech over what turned out to be a 50 kilobit
backbone in the ARPANET. And all of you know that when
you digitize speech, normally it’s a 64 kilobit stream. So cramming 64 kilobits per
second into a 50 kilobit channel is a little
bit difficult. And in fact, we wanted to carry
more than one voice stream in this backbone. So we compressed the voice down
to 1,800 bits per second using what was called linear predictive coding with 10 parameters. All that means is that the vocal tract was modeled as a stack of 10 cylinders whose diameters were changing as the voice spoke. And this stack was excited by a formant frequency. You would send only the diameters of each of the cylinders plus the formant frequency to the other side. You’d do an inverse calculation to produce sound and hope that somehow it would be intelligible on the other end.
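To make that compression concrete, here is a rough, purely illustrative frame budget showing how 10 parameters per frame can land near 1,800 bits per second. The exact quantization used in those 1970s experiments isn’t spelled out here, so the bit counts below are assumptions:

```python
# Hypothetical LPC frame budget -- the real 1970s bit allocations may have differed.
coeff_bits = 10 * 3      # ten vocal tract parameters at roughly 3 bits each
pitch_bits = 6           # excitation (pitch/formant) value
gain_bits = 5            # loudness
frame_bits = coeff_bits + pitch_bits + gain_bits   # 41 bits per frame

frames_per_second = 44   # roughly 22 millisecond frames
print(frame_bits * frames_per_second)   # 1804 -- about 1,800 bits per second
```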
Speaking of intelligible, I am struggling right now with a– it’s vodka. This should be very interesting by the end of it. So part of the problem with this
compression ratio, going from 64 kilobits down to 1,800
bits per second, is that you lose a certain amount
of quality of voice. And so when you spoke through
this system, it basically made everyone sound like a
drunken Norwegian. The day came when I had
to– sorry about this. Wow. The day came when I had to
demonstrate this to a bunch of generals in the Pentagon. And I got to thinking, how
am I going to do this? And then I remembered that
one of the guys that was participating in this experiment
was from the Norwegian Defense Research
Establishment. His name was Yngvar Lundh. And so we got the idea that we’d
have him speak through the ordinary telephone system. Then we’d have him
speak through the packet radio system. And it sounded exactly
the same. So we didn’t tell the generals
that everyone would sound that way if they went through
the system. Today is actually a very
important milestone. Today is the 30th anniversary of
the first demonstration of getting all three of the
original networks of the internet to interconnect and
communicate with each other. We took– the packet radio van
was driving up and down the Bayshore Freeway radiating
packets. They were intended to be
delivered to USC Information Sciences Institute in Marina Del
Rey, California, which is just to the west
of Los Angeles. But we jiggered the gateways
so that the routing would actually go from the packet
radio net, through the ARPANET, through an internal
satellite hop down to Kjeller, Norway, then down by landline to
University College London, then out of the ARPANET through
another gateway, up through a satellite ground
station at Goonhilly Downs. And then up through the Intelsat
4 satellite, then down to Etam, West
Virginia to another satellite ground station, through another
gateway, and back into the ARPANET, and then all
the way down to USC ISI. So as the crow flies, the
packets were only going 400 miles, from San Francisco down to Los Angeles. But if you actually measured where the packets went, they traveled over 88,000 miles. Because they went through two
satellite hops up and down, and then across the Atlantic
Ocean twice, and across the United States. So we were all pretty excited
about the fact that it actually worked. I don’t know about you, but I’ve
been in the software game for a very long time. Software never works. I mean, it’s just a miracle
whenever it works. So we were leaping up and down,
screaming, it works, it works, as if it couldn’t
possibly have worked. So we celebrated this particular
anniversary a couple of weeks ago at
SRI International. We invited everybody who had
been involved in this particular demonstration. And quite a few people were
able to come back. And we got to renew our
old acquaintances. But that was a very important
milestone today, 30 years ago. Of course, if you look at the
internet in 1999, or even today, this is the sort of thing you see. Highly connected, much larger, more colorful. And that’s about as much as you can say about the internet; it’s a pretty accurate description. It got a heck of a lot bigger
over the 30-year period. Some of the things that have
made the internet successful were fundamental decisions that
Bob Kahn and I and others made at the beginning
of this thing. One thing that we knew is that
we didn’t know what new switching and transmission
technologies would be invented after we had settled on the
design of the internet. And we did not want the
net to be outmoded. We wanted to be future-proof. So we said we don’t want the
internet layer protocol to be very aware of or dependent upon
which technology was used to move packets from one
point to another. We were fond of observing that
all we needed from the underlying transmission system
is the ability to deliver a bag of bits from point A to
point B with some probability greater than zero. That’s all we asked. Everything else was done on an
end to end basis using things like TCP or UDP in order to recover
from failures, or to retransmit, or to weed
out duplicates. So we were, I think, very
well-served by making that particular philosophical
decision. But there was something else
that also derived from that decision that I didn’t fully
appreciate until later. The packets not only don’t
care how they are being carried, but they don’t know
what they’re carrying. They’re basically ignorant of
anything except that they’re carrying a bag of bits. The interpretation of what’s in
the packets occurs at the edges of the net, in the
computers that are transmitting and receiving
the data. The consequence of this end to
end principle has been that people can introduce new
applications in the internet environment without having to
change the underlying networks and without having to get
permission from the internet service providers to try
out their new ideas. So when Larry and Sergey started
Google, they didn’t have to get permission from
an ISP in order to try this idea out. Or when Jeff Bezos did Amazon,
or when David Filo and Jerry Yang set up Yahoo! They simply did it. And the same is true
more recently when Skype was created. Nobody had to give any
permission to anyone. Same is true for BitTorrent
and many of the other peer-to-peer applications. Essentially, you get to do what
you want to do because the underlying network is, at least up until now, very neutral about the
applications. So this end to end principle has
been an important engine for innovation in the internet
in the past. Google and others believe that it should continue
to be an engine of innovation. And it can only do that if the
internet service providers essentially keep their hands off
the applications and just simply carry bits from point A
to point B. This doesn’t mean that an ISP can’t also have
value added applications. That’s not what this means. It just means that the provider
of the underlying transmission, in many cases
broadband transmission, that provider should not take
advantage of carrying the underlying transmission to
interfere with other parties competing at higher layers in
the protocol for applications that might be of interest
to the consumers. Similarly, the consumers, who
believe that they’re buying access to the full internet
everywhere in the world when they acquire broadband access
to the net, have a reason to expect that no matter where they
aim their packets, that the underlying system will
carry them there in a nondiscriminatory way. This does not mean, for example,
that you must treat every single packet on the
internet in precisely the same way. We all understand the
need for control traffic with high priority. We understand the possibility
that some traffic needs low latency. We understand that you may want
to charge more for higher capacity at the edges
of the net. Net neutrality does not mean
that everything is precisely the same. But what it does mean is that
there is no discrimination with regard to whose services
the consumer is trying to get to or who is offering those
services when it comes to traversing the broadband
channels that the ISPs are providing. One other thing about broadband
that’s turning out to be a problem is that, at
least in the United States, it’s an asymmetric service, as
you often can download things faster than you can upload them,
leading to anomalies like you can receive
high-quality video, but you can’t generate it. My general belief is that the
consumers are going to be unsatisfied with this asymmetry
and that there will be pressure to provide for
uniform and symmetric broadband capacity, which is
what you can get in other parts of the world. In Kyoto, you can get a gigabit
per second access to the internet. It’s full duplex, and it costs
8,700 yen a month. It almost made me want to move
to Kyoto, because it just seemed like such a very friendly
environment to try new things out. So we’re very concerned about
the symmetry and neutrality of the underlying network. The other thing I wanted to
point out is that this chart, which was prepared by Geoff
Huston, who’s an engineer in Australia, is intended to
illustrate the utilization of the IP version 4
address space. The important part of this
chart is the one that’s trending downward. That part is saying basically
these are the address blocks that the Internet Assigned
Numbers Authority is allocating to the regional
internet registries and that it will run out of IPv4 address
blocks somewhere around the middle of 2010. The regional internet
registries– yours here in Europe
is the RIPE NCC– will presumably hand
out portions of those address blocks. They’re likely to use those
up by the middle of 2011. The implication of this is
there won’t be any more available IPv4 address space. This doesn’t mean that the
network will come to a grinding halt. But what it does mean is there
won’t be any more IPv4 address space and that the only
addresses that will be available for further expansion will be IPv6 addresses. Now, I have to admit that
I’m personally the cause of this problem. Around 1977, there had been a
year’s worth of debate among the various engineers working
on the internet design about how big the address space should
be for this experiment. And one group argued for 32
bits, another for 128 bits, and another for variable
length. Well, the variable length guys
got killed off right away because the programmers said
they didn’t want variable length headers because it was
hard to find all the fields and you had to add extra
cycles to do it. And it’s hard enough
to get throughput anyway, so don’t do that. The 128 bit guys were saying,
we’re going to need a lot of address space. And the others guys were
saying, wait a minute. This is an experiment. 32 bits gives you 4.3 billion
terminations. And how many terminations do you
need to do an experiment? Even the Defense Department
wasn’t going to buy 4.3 billion of anything in order
to demonstrate this technology. So they couldn’t make
up their minds. And I was the program manager at
the time spending money on getting this thing going. And finally, I said, OK, you
guys can’t make up your mind. It’s 32 bits. That’s it. We’re done. Let’s go on. Well, if I could redo it, of
course, I’d go back and say, let’s do 128. But at the time, it would
have been silly. We were using full duplex
Echoplex kinds of interactions with timeshared machines
and local terminals across the network. And you can imagine sending one
character with 256 bits of overhead just for the
addressing, it would have been silly. So we ended up with a 32
bit address space. I thought we would demonstrate
the capability of the internet and that we would be convinced,
if it worked, that we should then re-engineer
for production. Well, we never got to
re-engineer it. It just kept growing. So here we are. We’re running out. And we have to use
the new IPv6. By the way, if you’re counting,
and you wonder, OK, IPv4, IPv6, what happened
to IPv5? The answer is it was an
experiment in a different packet format for streaming
audio and video. And it led to a cul de sac
and we abandoned it. And the next available
protocol ID was six. So that’s why we have IPv6. Now, with 128 bits of address
space, you can have up to 3.4 times 10 to the 38th unique terminations. I used to go around saying that means that every electron in the universe can have its own web page if it wants to, until I got an email from somebody at Caltech: Dear Dr. Cerf, you jerk, there are 10 to the 88th electrons in the universe, and you’re off by 50 orders of magnitude. So I don’t say that anymore. But it is enough address space to last until after I’m dead, and then it’s somebody else’s problem.
I won’t have time to go through very many of these, but I wanted to just emphasize that despite the fact that the internet has been around for some time now, certainly from the conceptual point of view for 35 years, there are still a whole bunch of research problems that have not been solved. Let me just pick on a couple of
them that I consider to be major issues. Security is clearly
a huge issue. We have browsers that are
easily penetrated. They download bad Java code and turn their machines into zombies, which become part of botnets. You have problems of denial of service attacks. You have the ability to actually abuse the domain name system and turn some of its components into amplifiers for denial of service attacks, which adds insult to injury. Multihoming, we haven’t done
a very good job of that in either v4 or v6. Multipath routing, we usually
pick the best path we can, but it’s only one path. If there were multiple paths
between a source and destination, if we could run
traffic on both of them, we’d get higher capacity. We don’t do that. We don’t use broadcast media
at all well in the current internet architecture. When you think about it, we turn
broadcast channels into point to point links. And it’s a terrible waste if
your intent is to deliver the same thing to a large
number of receivers. And that could very well turn
out to be a useful capability, not just for delivering things
like video or audio to a large number of recipients,
but software. People want to download a particular piece of software. If enough people wanted the same
thing, you could schedule a transmission over a broadcast
channel that would allow everyone to receive it
efficiently, perhaps by satellite, or over a coaxial
cable, or over a cable television network. So we haven’t done any of
those things very well. And we don’t have a lot of
experience with IPv6. And what’s even more important,
we don’t have a lot of experience running two IP
protocols at the same time in the same network, which
is what we are going to have to do. So in order to transition, we
can’t simply throw a switch and say, tomorrow we’re
using IPv6 only. We’re going to have to spend
years running both v4 and v6 at the same time. And we don’t have a lot of
experience with that. When you do two things instead
of one thing, you get more possible complications. The network management systems
may not know what to do when they get errors from both v4 and v6. Or worse, they get errors from
v6 but not from v4. The routing is working for one,
but not for the other. Do I reboot the router or not? What do I do? So there are a wide range of
issues, including a very fundamental problem with IPv6. We don’t have a fully connected
IPv6 network. What we have is islands
of IPv6. This is not the circumstance
we had with the original internet. Every time we added another
network, it was v4, and it connected to an already
connected network. But when we start putting in
v6 and not implementing it uniformly everywhere, then we’re
going to have islands. They could be connected
by tunnels through v4. It’s a very awkward
proposition. Until we have assurance that we
have a fully connected IPv6 network, people are going to be
doing domain name lookups, getting IPv6 addresses, trying
to get to them, and not getting there because you’re in
a part of the internet that doesn’t connect to the other parts of the internet that are running IPv6.
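To make that concrete, here is a small, hypothetical Python sketch of what a dual-stack client goes through today: the name lookup can return both IPv6 and IPv4 addresses, and the client has to try them in turn and cope with an IPv6 address that simply isn’t reachable from its island. The host name is just a placeholder:

```python
import socket

host = "www.example.com"   # placeholder; any dual-stacked host would do

# getaddrinfo can return both AAAA (IPv6) and A (IPv4) results for one name.
for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        host, 80, type=socket.SOCK_STREAM):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    try:
        with socket.socket(family, socktype, proto) as s:
            s.settimeout(3)
            s.connect(sockaddr)
            print(f"{label} address {sockaddr[0]} is reachable")
            break
    except OSError:
        # e.g. we resolved an IPv6 address but sit in a v6 island with no route to it
        print(f"{label} address {sockaddr[0]} is not reachable, trying the next one")
```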
So these are just headaches that are going to hit. And they’re going to start hitting in 2008, because all of us are going to have to start getting v6 into operation before we actually run out of IPv4 addresses. Switching to a slightly
different view of the net, there have been some really
surprising social and economic effects on the net that are
becoming more visible. The one that I find the most
dramatic is that the information consumers
are now becoming the information producers. So you can see it in the form
of blogging or YouTube or Google Video uploads, personal
web pages, and other things. People are pushing information
into the network as well as pulling it out. This is unlike any broadcast or
mass medium in the past. In the past, a mass medium had a
small number of information producers and a very
large number of information consumers. The internet inverts all of that
and allows the consumers also to produce content. Wikipedia has taught us another
very interesting thing about this internet
environment. I want you to think about a
paragraph in Wikipedia that you’re reading. And you see one word which
should be changed because you’re an expert in the area
and you know that the statement is wrong or
the sense of the paragraph is wrong. You could certainly make
that one word change. You would never publish a one
word scholarly paper. You wouldn’t publish
a one word book. But you can publish a one word change in a Wikipedia paragraph. And it’s useful. It’s a contribution
to everyone who looks at that paragraph. So the internet will absorb the
one word change, the one page change, one paper, one
book, one movie, one video. It is willing to absorb
information at all scales and in all formats, as long as
they can be digitized. So the barrier to contribution
into the internet environment is essentially zero. Another phenomenon which is
rapidly evolving is social networking. Many of you may already be using
LinkedIn or MySpace or Facebook or Orkut or
some of the others. That’s a phenomenon that’s going
to continue to grow. Especially young people enjoy
interacting with each other in this new medium. And they show a considerable
amount of creativity in inventing new ways of
interacting with each other. Similarly, game playing. Second Life, World of Warcraft, and a bunch of others. EverQuest is another one. What’s interesting about these
particular environments is really twofold. One of them is that there are
real people making decisions in these games. And some economists at Harvard,
for example, have asked their students to go
become participants in Second Life in order to observe the
kinds of economic decisions that people are making in the
context of these games, because they’re actually trying
out different economic principles within various parts
of the game environment. And so it’s actually an experiment that you couldn’t necessarily conduct in the real world that’s being conducted in this artificial
I would make is that the economics of digital
information are dramatically different from the economics
of paper or other physical media. Just to emphasize this, let me
give you a little story. I bought two terabytes of disk
memory a few months ago for about $600 for use at home. And I remembered buying a 10
megabyte disk drive in 1979 for $1,000. And I got to thinking, what
would have happened if I’d tried to buy a terabyte
of memory in 1979? And when you do the math,
it would have cost me $100 million. I didn’t have $100
million in 1979. And to be honest with
you, I don’t have $100 million now either. But if I’d had $100 million in
1979, I’m pretty sure my wife wouldn’t let me buy $100 million
worth of disk drives. She would have had a better
thing to do with it. The point I want to make, though, is that that’s a very dramatic drop in the cost of disk storage.
You’re seeing similar kinds of drops in the cost of moving bits and processing bits electronically. The business models that you
can build around those economics are very different
from the business models that were built around other media,
paper or other physical media. And companies that built their
businesses around the older economics are going to have to
learn to adapt to the new economics of online, real
time, digital processing transmission and storage. And if they don’t figure out how
to adapt to that, they’ll be subject to Darwinian
principles. This is a very simple principle,
adapt or die. And so if you don’t figure out
how to adapt, the other choice is the only one you have. So a
number of companies are going to be, I would say, challenged
to understand that the economics of digital information
are really demanding them to rethink
their business models. This was a chart that
was generated by a company called Sandvine. They’re doing some deep packet
inspection to understand the behavior of users at the
edge of the network on a particular channel. What they were looking at here
is a variety of applications that are visible as the packets
are traversing back and forth over access lines. What was important here is
that the YouTube traffic represented somewhere between 5%
and 10% of all the traffic that they measured on this
particular access channel. And I bring this up primarily
to say that YouTube is only two years old. So just look at what happened
with an application very recently suddenly blossoming
into a fairly highly demanding application in terms of capacity
on the network. We can easily imagine that other
applications will be invented that may have different
profiles of demand for traffic, either uploading
or downloading. or maybe low latency, or
many other things. So the point here is that the
network is very dynamic. It is constantly changing. New applications are coming
along, making new demands on its capacity. And so this is not stable in
the same sense that the telephone network was stable,
where you could use Erlang formulas to predict how many
lines you needed to keep people from getting a busy
signal, below 1% probability.
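For readers who haven’t met them, the Erlang B formula is the classic tool here: given an offered load and a number of lines, it gives the probability that a new call finds every line busy. A small sketch, with made-up traffic figures:

```python
def erlang_b(offered_load_erlangs: float, lines: int) -> float:
    """Blocking probability, computed with the standard Erlang B recursion."""
    b = 1.0
    for k in range(1, lines + 1):
        b = (offered_load_erlangs * b) / (k + offered_load_erlangs * b)
    return b

# How many lines keep blocking below 1% for 100 Erlangs of offered traffic?
lines = 1
while erlang_b(100.0, lines) > 0.01:
    lines += 1
print(lines)   # roughly 117 lines
```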
The internet does not have stable statistics like that. And because new applications
can be invented simply by writing a new piece of software,
I think we’re not ever going to be able to predict
very well the actual behavior of the net
at the edge. In the core of the net, it’s
a different story, because you’re aggregating a large
number of flows. And you can get fairly stable
statistics for the core of the net, but not at the edge. I’ve been thinking a lot about
how the economics of digital storage and transmission have an
effect on certain kinds of media, like video
in particular. Let’s just take a moment a
couple of observations. 15% percent of all the video
that people watch is real time video. It’s being produced
in real time. It’s a news program. It’s an emergency or maybe
a sporting event. 85% five percent of video that
people watch is actually pre-recorded material. So in this chart, there are
two kinds of video, RT for Real Time video that’s been
generated in real time. And PR for Pre-Recorded video
that’s being transmitted through the network. And imagine now that
we’ve got two axes. One is the transmission
rate that’s available to you as the consumer. And the other is the storage
you have available locally. And the split is that high
transmission rate means it’s sufficiently high to deliver
things in real time and low means you can’t deliver
it in real time. Low storage means there isn’t
enough memory locally to store any reasonable amount
of video. High means there’s enough
storage to store reasonable amounts, which might be measured
in hours of video. So the question is,
which quadrant are you in as the consumer? If you’re in the lower left hand
quadrant, where you can’t transmit in real time, and you
don’t have any place to store it, you’re basically
out of luck. Video is not an interesting
medium for you. If you have very high
transmission rates and no storage available, you can
easily receive the streams in real time, just as you would
over a typical cable or satellite or over the air
transmission system. You can even potentially receive
the pre-recorded material at higher than
real time speeds. But since you don’t have any
place to store them, it doesn’t do you a lot of good. So basically, in the upper
left, you’re stuck with streaming video in real time. In the upper right, it’s
much more interesting. Because here you have high speed
available and you have a lot of storage available. The real time stream could
be delivered and watched in real time. It could be delivered in real
time and stored away and watched later, just
like TiVo or other personal video recorders. But it’s the pre-recorded
stuff that gets really interesting. You could clearly transmit
it in real time. But you can also transmit it
faster than real time because you have a data rate that
exceeds the rate at which the video is normally transmitted
for viewing. The implication of this is that
video on demand no longer means streaming video. It means delivering video potentially faster than you could watch it.
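A quick, illustrative calculation shows why: with an access line faster than the encoded rate of the video, the whole file arrives long before you could have watched it. The rates below are assumptions, not measurements:

```python
movie_length_s = 2 * 3600        # a two-hour movie
encoded_rate_bps = 4_000_000     # assume video encoded at about 4 Mbit/s
file_size_bits = movie_length_s * encoded_rate_bps

link_bps = 100_000_000           # assume a 100 Mbit/s access link
download_s = file_size_bits / link_bps
print(f"{download_s / 60:.0f} minutes to download a 120-minute movie")   # about 5 minutes
```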
Anybody that uses an iPod today is experiencing that. Because you’re downloading music
faster than you could listen to it. And then you play it back
at your leisure whenever you want to. It’s my belief that IPTV is
going to be the download and playback style of iPod as long
as the data rates at the edges of the net are sufficiently
high. So what does that mean for
the television industry? And I’d like to use the word
television here to refer to a business model and the word
video to refer to the medium. And so my interest here is
understanding what happens to the video medium and the
business of video when it ends up into an internet
environment. One thing that’s very clear is
that because you packetized everything when you’re
downloading, the data that you’re downloading doesn’t
have to be confined to video and audio. It could easily contain
other information. So when you get a DVD, it has
bonus components on it. It’s bonus videos. It’s textual material. Maybe it’s the biographies of
the actors, or the story of how the movie was made,
or the book that the movie was based on. So when we’re downloading stuff
associated with video and it’s coming through the
internet, we can download all forms of digitized content,
store it away, and then access it later. Among the things that
could be downloaded is advertising material. In the conventional video world
you interrupt the video for an advertisement, and you
force it on the users, on the consumers. In the world of internet based
systems, when you’re playing back the recorded content,
there’s a program, a computer, which is interpreting
the traffic and interpreting the data. And so it’s not just a stupid
raster scan device. It can actually make decisions
based on the kind of information that’s
being pulled up. So imagine that you’ve composed
an entertainment video and that you’ve made some
of the objects in the field of view– like maybe this Macintosh is
sitting in the field of view– you’ve made those objects
detectable or sensitive to mousing. So if you mouse over that
particular object, it highlights. And a window pops open. It says, gee, I see you’re
looking at the Macintosh. Let me tell you a little bit
more about that product. Click here if you’d like to find
out whether there are any available at the Apple Store. By the way, do you want to
complete the transaction now? And then go back to watching
the movie. The idea of allowing the users
to mouse around in the field of view of an entertainment
video is a transforming idea with regard
to advertising. And it feels a little funny, a
computer programmer like me sitting up here, getting excited
about advertising. But remember, that’s where
Google makes all its revenue. So we care a lot about new
styles of advertising that would improve the consumers’
control over what advertising he or she has to
be exposed to. And also, it turns out the
advertisers care a lot about knowing whether the users are
interested in their products. And so knowing that
nobody is– if they’re not interested,
they don’t click. If they are, they do. And that is a big jump up in
understanding something about the potential client for your
products or your services. So my prediction is that video
in the internet environment, where high speed interfaces
are available and lots of storage are available, will be a
transforming opportunity for users to control advertising and
for advertisers to wind up with a much better product
than they have today. I mentioned mobiles before. And I just want to emphasize
that these are programmable devices. These are not just telephones
anymore. Google recently announced an
operating system called Android that we would like to
make available to anyone who’s building these wireless
platforms. The purpose is to open up the platform so that
you can download new applications and allow users to
try out new things without too much difficulty. These things are already
useful for accessing information on the net. Here, especially, in Europe,
they are being used to make payments. This is a challenge, though. I’ve got a BlackBerry here, and
it has a screen that’s the size of a 1928 television set. And the data rates that you can
reach this thing with vary from tens of kilobits a second
to maybe as much as a megabit. And the keyboard is just
great for anybody who’s three inches tall. So these are pretty
limiting devices. But it seems to me that they are
going to be very important because of the prevalence of
these devices, especially in areas where alternative
access to the internet isn’t available. What is interesting about these
is that because you carry them around on your person
or in your purse, they become your information
access method, your information source. And often, you want information
which is relevant to where you are
at the moment. So geographically indexed
information is becoming very, very valuable and very
important, especially as you access it through mobiles. I have a small anecdote to share
which emphasized for me the importance of having access
to geographically indexed information. My family and I went on a
vacation this May in a place called Page, Arizona. It’s adjacent to something
called Lake Powell. We decided to rent
a houseboat. Believe me, if you like steering
things around, don’t rent a houseboat. It steers like a houseboat. Anyway, I was terrified that I
was just going to ricochet my way down the lake. But in any case, the problem is
once you get on the boat, there’s no place to
get any food. So you have to prepare
by buying food and bringing it onboard. So as we were driving into
Lake Powell, we were discussing what meals we
were going to produce. And somebody said, well,
I want to make paella. And I thought, well,
that’s interesting. You need saffron to do that. Where am I going to find saffron
in this little town of Page, Arizona? So fortunately, I got
a good GPRS signal. So I pulled out the
BlackBerry. And I went to Google. And I said, Page, Arizona
grocery store saffron. And up popped a response with
an address, a telephone number, name of the store, and
a little map to show you how to get there. So I clicked on the
telephone number. And, of course, this being a
telephone, it made the call. The phone rang. Somebody answered. And I said, could I please speak
to the spice department? Now, this is a little store. So it’s probably the owner
who said, this is the spice department. And I said, do you have any saffron? He says, I don’t know. He went off. And he came back. He says, yeah, I’ve got some. So we followed the map. This is all happening
in real time. We follow the map, drive
into the parking lot. And I ran in and bought $12.99
worth of saffron. That’s 0.06 ounces, in
case you wondered. And we went off on
Lake Powell. We made a really nice paella. What really struck me is that I
was able to get information that was relevant to a specific
need in real time and execute this transaction. It would not have worked– can you imagine going and trying
to find something in the white pages or the yellow
pages at a gas station or what have you? The fact that you can get
information is useful to you in real time is really
quite striking. So I believe that as this mobile
revolution continues to unfold, that geographically
indexed information is going to be extremely valuable. Well, some of you have been
around for a while and have watched the internet grow. I’ve been a little stunned at
some of the devices that are starting to show up on the
network, like internet enabled refrigerators or picture frames
that download images off of web sites and then
cycle through them automatically, or things that
look like telephones, but they’re actually voice
over IP computers. But the guy that really stunned
me is the fellow in the middle. He’s from San Diego. He made an internet
enabled surfboard. I guess he was sitting out on
the water thinking, you know, if I had a laptop in my
surfboard, I could be surfing the internet while I’m waiting
to surf the Pacific Ocean. So he built a laptop into
his surf board. And he put a WiFi service
in the rescue shack back on the beach. And he now sells this
as a product. So if you’re interested in buying an internet enabled surfboard, he’s the
guy to go to. I honestly think that there are
going to be billions of devices on the net, more devices
than there are people. And if you think about the
number of appliances that serve you every day, there
are lots of them. And imagine that they’re
all online. Imagine being able to interact
with them or use intermediary services to interact
with them. So as an example, instead of
having to interact directly with your entertainment systems,
if they were up on the network and accessible that
way, you might interact through a web page on a service
on the network, which then turns around and takes
care of downloading movies that you want to watch or music
that you want to listen to or moving content from
one place to another. All of that could be done
through the internet. In fact, a lot of those devices
have remote controls. And if you’re like me, there
are lots of them. And then you fumble around
trying to figure out which remote control goes
with which box. And after you figure that out,
that’s the remote control with the dead battery. So the idea here is to replace
all those with your mobile, which is internet enabled. So are the devices
in the room. You program them and
interact with them through the internet. So you don’t even have to
be in the same room. Gee, you don’t even have
to be in the house. You could be anywhere in the
world where you could get access to the internet, and
you could control your entertainment systems. Of course, so could the
15-year-old next door. And so you clearly need strong
authentication in order to make sure only the authorized
users of these systems are controlling your entertainment
system, or your heating and ventilation, or your security. All of these things could
easily be online– lots of appliances at home,
appliances in the office, all manageable through the
internet and offering opportunities for third parties
to help manage some of that equipment for you. So this kind of the network
of things is creating opportunities for people
to offer new products and services. I don’t have time to
go through all of these various examples. But there are little scenarios
you can cook up, like in the refrigerator that’s online. If your families are like
American families, the communication medium between family members is generally paper and magnets on the front
of the refrigerator. And now, if you put up a nice
laptop interface on the front of the refrigerator door, you
can communicate with family members by blogging, by instant
messaging, and by web pages and email. But it gets more interesting
if you imagine that the refrigerator has an RFID
detector inside. And RFID chips are on the
products that you put inside the refrigerator. So now the refrigerator can
know what it has inside. And while you’re at work, it’s
surfing the internet looking for recipes that it knows
it could make with what it has inside. So when you get home, you see a
nice list of things to have for dinner if you like. And you can extrapolate this. You might be on vacation,
and you get an email. It’s from your refrigerator. It says, I don’t know how much
yogurt is left, but you put it in there three weeks ago,
and it’s going to crawl out on its own. Or maybe your mobile goes off. It’s an SMS from your
refrigerator. Don’t forget the
marinara sauce. I have everything else I need
for spaghetti dinner tonight. Now, unfortunately, the Japanese
have spoiled this beautiful scenario. They’ve invented an internet
enabled bathroom scale. When you step on the scale, it
figures out which family member you are based
on your weight. And it sends that information
to the doctor to become part of your medical record. And, of course, that’s OK,
except for one problem. The refrigerator’s on
the same network. So when you come home,
you see diet recipes coming up on the display. Or maybe it just refuses
to open because it knows you’re on a diet. I’m going to skip over–
oh, wait a minute. I’m sorry. There’s some important
stuff here. I mentioned IPv6 earlier. And that’s something that
of the internet at the internet protocol layer. There are other things that are
happening in 2007 and now 2008 that are going to have an
impact on all of us who offer various kinds of internet
service. One thing is the introduction
of non-Latin top level domains, internationalized
domain names, that are written in character sets that include
things like Arabic, and Cyrillic, and Hebrew, and
Chinese of various kinds, and Kanji, and Hangul,
and so forth. ICANN has already put up 11
test languages in the top level domains, in the
root zone file. And it’s encouraging people
to go there and to try out interactions with those domain
names using various application software packages,
including browsers and also email to give you a chance to
see how the software will interact with these non-Latin
character domain names. They’re are represented
typically in Unicode. It may be Unicode encoded
in UTF-8, for example. But what’s important is that the
software has to recognize that these are domain names,
even though they’re expressed using strings other than simply
A through Z and zero through nine and a hyphen. The other thing which is going
on is the digital signing of domain name entries. So DNSSEC is a way of allowing
someone who’s doing the domain name lookup to ask for a
digitally signed answer. And when that comes back, you
have some validation that the information in that domain
name entry has not been altered, that it has maintained
integrity from the time it was put in. This is not encrypting anything. It’s simply a question of
digitally signing things to make sure that the information
is as valid as it was when it went in the first place. Those things are all under
Those things are all under way now in the internet. And they may have an impact on every one of us who is involved in building systems
that run on the internet. I’d like to just quickly go
through a couple of other points here. One of them is that intellectual
property handling in the online environment is
becoming quite a challenge. Digitized information is easy
to copy, and it’s easy to distribute. The philosophy behind copyright
laws, including the Berne Convention that was
developed here in Switzerland, says that physical copies of
things are what we are concerned about. And the difficulty of copying
or the cost of copying physical objects is what has
made that particular law [INAUDIBLE] well. But in the presence of digital
versions of these things, it’s turning out to be much
harder to enforce. It may very well be that we need
to back away and rethink what copyright means in this
online and digital environment. There are alternatives that
have been suggested. Creative Commons is one of them,
celebrating its fifth anniversary this year, that may
find alternative ways of compensating authors or of
letting authors say whether they want to be compensated or
in what way they want to be compensated for their
intellectual property that’s been put into this online
environment. Tim Berners-Lee has worked for
some time and spoken often about the semantic web. This is an idea that allows us
to interact with the content of the network, not just with
strings, but with some notion of the meaning of the strings. That project is still
a work in progress. If in fact it’s possible to
codify or otherwise indicate the meaning of contents on
the net, it would be very beneficial, certainly, from
Google’s point of view. Because today, we tend to
navigate you to a document. But what you really wanted
was answers. And in order to do a better
job of helping you find answers, we need to understand
the semantics of what’s in the net. And right now, we
can’t do that. I’m becoming increasingly
concerned about the nature of the objects that are in
the internet today. Some of them are extremely
complex. They’re not interpretable
without software. A spreadsheet, for example,
is a dead thing until you actually bring it up in the
spreadsheet program and interact with it. It’s a very complex object
sitting on the net somewhere. You can’t simply print it out. Well, you can, but you get a
very limited representation of the real content, meaning, and complexity of the objects. So I’m concerned that over
time, the contents that we put into the internet will be
dependent on software to interpret what the bits mean. That leads me to my biggest
worry, which I’ll call bit rot. If in fact bits are stored away
over time, and they’re moved from one medium to another
as new storage media come along, what will happen
if we lose access to the software that knows how
to interpret the bits? At that point, you won’t know
what you have other than a bag of bits. And so the question is,
what do we do about that? Let me give you a scenario. It’s the year 3000. And you’ve just gone through
a Google search. And you’ve turned up a PowerPoint
file from 1997. So suppose you’re running
Windows 3000. The question is, does Windows
3000 know how to interpret the 1997 PowerPoint? And the chances are
it does not. And this is not an arbitrary
dig at Microsoft. Even if this were open source
software, the probability that you would maintain backward
compatibility for 1,000 years strikes me as being
fairly low. So the question is what to do
if you’re thinking about information that you want to be
accessible 1,000 years from now like vellum documents
are today that are 1,000 years old. We have to start thinking about
how to preserve software that will be able to interpret
the bits that we keep saving. You may even have to go so far
as to save the operating system that knew how to run
the application that could interpret the bits. And maybe even emulate
hardware that ran the operating system that knows
how to run the application that can interpret the bits. There is very, very little
progress made right now in that domain, something
that we should be very concerned about. Otherwise, 1,000 years from now,
historians will wonder what the heck we did in
the 21st century. There will be nothing about
us other than a pile of rotting bits. And that’s all they will know,
is that we were the rotten bit generation. And I’m sure that’s not what we
want them to know about us. OK, last update. And I have a whole bunch of
questions here that people have asked already. And I’ll try to answer
some of them. This project that I’m about
to tell you about is not a Google project. Google lets me have time
to work on it. But I don’t want you to walk out
of this auditorium saying, hah, I’ve figured out what
Google’s business plan is. It’s going to take over
the solar system. That’s not what this is about. This is about supporting the
exploration of the solar system using standardized
communication protocols. Because historically, we have
not standardized these communication systems in
the same way that we’ve standardized communication
in the internet. Now, we all know we’ve
been exploring Mars using robotic equipment. Usually, to communicate with
the spacecraft, we use the Deep Space Network, which
was developed in 1964. These are big 70-meter dishes
in Goldstone, California, Madrid, Spain, and Canberra,
Australia. There are also adjacent 35-meter antennas as well. Half of one is kind of visible
over the right hand corner of that image. So as the earth rotates, these
big deep space dishes are rotating along and seeing out
into the solar system, able to communicate with spacecraft like
this one, which may be in orbit around a planet or flying
past an asteroid, or in some cases actually landing on
the surface of a planet like the rovers in 2004. One thing that you might not
know is that most of the communication protocols that are
used for these deep space missions are tailored to the
sensors that are on board the spacecraft
platforms, in order to make most efficient use of the
available communication capacity, which is often
fairly limited. The rovers that went onto Mars
in the beginning of 2004 are still running, which is pretty
amazing, considering their original mission time
was only 90 days. So they’re still in operation,
although one of them– I forget which one– has a
broken wheel, and it’s kind of dragging furrows in
the Martian soil. But they’re still operational. One of the problems that showed
up, though, very early in the rover mission is that the
plan was to transmit data with the high gain antenna. There’s a thing that looks
like a pie tin on the right hand side. That was the high gain antenna
that was supposed to transmit data straight back to the
Deep Space Network from the surface of Mars. When they turned the radios on
and started transmitting, they overheated. And it happened on both
spacecraft, so it was a design problem. So they had to reduce the duty
cycle to avoid having the radio damage itself, which really drove the principal investigators crazy, because 28
kilobits wasn’t very much to begin with. And now it’s less frequent
transmissions. So the guys at JPL figured
out a way to essentially reconfigure systems so that the
data could be transmitted from the rover up
to an orbiter. And there were four orbiters
available around Mars. They were reprogrammed in order
to take the data up on a 128 kilobit radio, a different
radio, which didn’t have very far to go. So the signal to noise ratio
was high enough to get much higher data rates. Then the data was stored in
the orbiter until it got around to the point where it
could transmit the data to the Deep Space Net. And once again, it could
transmit at 128 kilobits a second, partly because the
orbiters had bigger solar panels and more power available
than the rovers on the surface. So the net result is that all
the information that’s coming back from the rovers is going
through a store and forward system, which, of course, is
the way the internet works.
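Just to illustrate the store-and-forward idea in miniature (a toy sketch of mine, not JPL's actual software and not the DTN bundle protocol itself): each hop simply holds data in a queue until the next hop happens to be reachable, so nothing is lost during the gaps.

# Toy sketch of store-and-forward relaying: a rover queues observations,
# an orbiter picks them up only while the rover is in view, and forwards
# them to Earth only while Earth is in view. Data waits at whichever hop
# currently holds it; it arrives later, but it arrives.
from collections import deque

rover_queue = deque(["image-001", "image-002", "spectra-003"])
orbiter_queue = deque()
received_on_earth = []

# One entry per orbit pass: (rover_in_view, earth_in_view)
passes = [(True, False), (False, True), (True, True), (False, True)]

for rover_in_view, earth_in_view in passes:
    if rover_in_view:                      # uplink leg: rover -> orbiter
        while rover_queue:
            orbiter_queue.append(rover_queue.popleft())
    if earth_in_view:                      # downlink leg: orbiter -> Earth
        while orbiter_queue:
            received_on_earth.append(orbiter_queue.popleft())

print(received_on_earth)   # all three items arrive, just later than sent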
This reconfirmed an idea that my colleagues at the Jet Propulsion Lab and I have been
pursuing since 1998. And that’s the definition
of protocols to run an interplanetary extension
of the internet. Basically, we assume we’re
running TCP/IP on the surface of the planets and in
the spacecraft. Those are low latency
environments. TCP/IP works very well there. We thought we could get away
with running TCP/IP for the interplanetary part. That idea lasted about a week. It’s pretty obvious what
the problem is. When Earth and Mars are farthest
apart in their orbits, they’re 235 million
miles apart. And it takes 20 minutes one way
at the speed of light for a signal to propagate.
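As a quick back-of-the-envelope check on that figure, using only the numbers just quoted, the one-way light time does land in that ballpark:

# Rough one-way light delay between Earth and Mars at maximum separation,
# using the figures quoted above. Prints roughly 21 minutes, i.e. the
# ballpark 20-minute delay being described.
speed_of_light_miles_per_s = 186_282
max_distance_miles = 235_000_000
delay_minutes = max_distance_miles / speed_of_light_miles_per_s / 60
print(f"{delay_minutes:.0f} minutes one way")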
And you can imagine how flow control will work with TCP. You’d say, OK, I’m
out of room now. Stop. The guy at the other end doesn’t
hear you say that for 20 minutes. Of course, he’s transmitting
like crazy. And then packets are falling
all over the place. It doesn’t work. To make matters worse,
there’s this thing called celestial motion. The planets have this nasty
habit of rotating. And so you can imagine you’re
trying to talk to a rover on the surface of Mars. And after a while, it rotates
out of sight and you can’t talk to it until it gets back
around to the other side. So the communication
is disrupted. So we concluded very quickly
that we were faced with a delay and disruption problem and
needed to invent a set of protocols that built that into
its assumptions, which were frankly not part of the internet
assumptions, the TCP/IP protocols. So we developed a set
of protocols. We’ve been going through
tests of them. We’ve got to the point now
where we have publicly available software. It’s all up on the
DTNRG website– Delay and Disruption Tolerant
Networking Research Group– dot org. And that protocol has now been
tested in terrestrial environments. We picked two. The Defense Department, DARPA,
funded the original interplanetary architecture
work. And after we got done and
realized this DTN thing was a serious problem, we realized
that they had a problem in tactical communication, the
same problem, disruption, delay, uncertain delay. So we went back and said, we
think you have a problem. And we think you could use the
DTN protocols for tactical military communication. So they said, OK, prove it. And we said, OK. We went off, and
we built some– these motes that come from
Berkeley, little devices running Linux. We built a
bunch of motes and put the DTN protocols on board. And then we went to
the Marine Corps. And we said, OK, we’d like
to test this stuff out. What application would
you like to run? And they said, chat. I said, are you kidding? You’re sitting here with bullets
whizzing by and you’re going like this? And they said yes, because chat
has the nice feature that when you reconnect, all the
exchanges that took place that you missed are then given to you. And so you get back in sync with
the other people who are part of this communications
environment.
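A toy sketch (mine, not the software used in that field test) of why chat suits this kind of network: keep the whole exchange in a log, and let a participant who reconnects simply ask for everything after the last message it saw.

# Toy sketch of chat resynchronization after a disruption: the shared log
# keeps every message, and each participant tracks how far into the log
# it has read. On reconnect it fetches only what it missed.
chat_log = []                              # all messages, in send order
last_seen = {"unit_a": 0, "unit_b": 0}     # per-participant cursor into the log

def send(sender, text):
    chat_log.append((sender, text))

def reconnect(participant):
    """Return the messages this participant missed and advance its cursor."""
    missed = chat_log[last_seen[participant]:]
    last_seen[participant] = len(chat_log)
    return missed

send("unit_a", "moving to checkpoint 2")
send("unit_a", "checkpoint 2 reached")
# unit_b was out of contact while those were sent; on reconnect it catches up.
print(reconnect("unit_b"))   # both missed messages are delivered in order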
So we said, well, OK. So we did that. We implemented it. We went out to Northern Virginia to a test deployment of these things with
the Marine Corps. And it worked. And so I thought that was pretty
cool, a demonstration of DTN working terrestrially,
something useful coming out of all this. And the next thing I knew,
they’d taken all of our stuff to Iraq. And I said, wait a minute,
it’s an experiment. And they said, no it isn’t. And off they went. So we said, all right, fine. Then we thought, well, let’s
try this in a more civilian environment as well. Some of you, I’m sure, are
familiar with the fact that the reindeer herders, the Sami,
are in the northern part of Sweden, and Finland, and
Norway, and Russia. And they are pretty isolated,
because they’re so far north. And satellite communication is
a problem because the dishes are, bang, right there
on the horizon. So we said, well, what would
happen if we stuck a laptop with 802.11 in it and
the DTN protocols in an all-terrain vehicle? And we put WiFi service
in the villages. So we tried one village. And we tried this random
interaction with the system using DTN. And it worked. That was last year. So next year, we’re going to try
a multiple village test of the DTN protocols to
see whether it works. And if it works out well enough,
then maybe we’ll put these things in the snowmobiles so that it’ll work both during the summer and in the winter. So we’re very happy that we’ve
got terrestrial examples of the use of the DTN protocols. We’re now at the point where we’re ready to start space-based testing. In 2009, we’re hoping to put the
DTN protocols on board the International Space Station. And in 2011, NASA has offered
to allow us to put the protocols on board the Deep Impact spacecraft that already completed its primary mission,
which was to launch a probe into a comet and gather
data back. But it’s still out there, and
it’s still functioning. So somewhere around 2011,
we hope to space qualify the DTN protocols. And after that, Adrian Hooke,
who’s my counterpart now at NASA, is also Chairman of the
Consultative Committee for Space Data Systems. And we hope
to introduce that as a standard protocol for use
in space communication. So what we’re expecting, if
we’re lucky, is that the space agencies around the world will
adopt this as a standard. They’ll use it for every mission
that they launch. And that means that every time
you launch a new mission, any previous mission assets that are
available can become part of the support structure for
the newly launched mission. What will happen as a result
is that it will accrete an
period of decades as more and more of these missions get
launched in the system. So let me stop there and
thank you again for taking all this time. And we’ll see whether we can
answer a few questions here if that’s OK with you. So let me– these are questions that
apparently were submitted by many of you here. And I’ll answer a few. And then we’ll see if we
get some immediate ones from the floor. The first question says, for many companies, the breakdown of the internet for, say, three days would lead to substantial damage. Question, is it possible to
estimate the probability of such a breakdown of the
entire internet? And if so, how could
it be down? And are there any numbers
available? Or maybe how could it be done? I think the answer is I’m
not going to tell you how it could be done. The answer is that the
probability that the entire internet could be taken down
seems to be pretty small. There have been plenty of
opportunities over the past 20 years or so since the 1983
rollout of the internet to destroy it in one way or another. And in spite of the fact that
denial of service attacks are by far one of the most serious
threats to internet stability, it seems unlikely that
the entire internet would be taken down. I will observe, however, that
we manage to shoot ourselves in the foot fairly regularly
by mis-configuring things. So if you mis-configure the
routing tables of the routers, you can easily damage
significant parts of the internet. And we seem to do that
with more frequency than we would like. But I think on the whole, the
robustness of the system has been pretty substantial. That doesn’t mean we shouldn’t
be introducing increasing amounts of security mechanisms
into the network in order to limit that risk. What’s the biggest fallacy
about the internet? Well, one of them is that
Al Gore invented it. He didn’t invent it,
but he did have something to do with it. The other big fallacy is some
people think the internet happened because a bunch of
local area networks got together one day and
said, let’s build a multi-network system. The fact is that it started with
wide area networks and took a long time. Will there ever be secure web
apps based on a browser alone? Would apps be more secure
if run in applets? Boy, that’s a really
good question. Right now, the most vulnerable
part of the internet world is the browser. Browsers ingest Java code or
other high level codes, and they often are unable to detect
that this code is actually trying to take over
the machine, or install a Trojan horse, or do some
other damaging thing. If we collectively were to
invest anything at all, I think we should be investing in
building much, much smarter browsers that are able to defend
against some of the dangerous downloads. Should the evolution of
the basic protocols such as HTTP go– or where should it go in order
to support more asynchronous interactions of web 2.0? And a related question,
is there too much overhead in HTTP? Well, first of all, I’d say
that we should not depend solely on HTTP as the medium
of interaction on the net. Peer-to-peer applications
are really interesting. And although some of them get
abused for copying and distributing material that’s
copyright, in fact, they often are a very efficient way of
linking people who want to communicate in the network,
either peer-wise or in multiple groups. So my reaction right now to
this one is that we really should be looking at
asynchronous, peer-to-peer kinds of interactions in
addition to the more classical HTTP designs. Can the world wide web help
humanity to become a just, democratic society? Well, the short answer
to that is no. Probably not, although to be
really honest and fair, the internet probably is the most
democratic communication system that we’ve ever had,
because it allows so many people to introduce content,
and to share it, and exchange it. But humanity is what it is. Shakespeare keeps telling
us about them. That’s why the plays are
still so interesting. So for humanity to become a just
and democratic society is going to take a fair amount of
adjustment of the human beings who make up that society, not
just the technology that surrounds them. Authentication on the internet
typically means complete identification. How can the privacy of
users be protected? The answer is that we
need to do both. We need to have anonymous
access to the internet. And we also need to have the
ability to do strong authentication. And the reason that you want to
do both is that sometimes it’s important to
be anonymous. We all understand you can abuse
the anonymity, and do bad things, and stalk people,
and say things that are not true. But there are also times when
it’s important to be anonymous in order to allow
whistleblowing. On the other side, there are
transactions that we want to engage in for which we really
do need to know who the other party is. And so we want strong mechanisms
to allow people to validate each other to
themselves, or to validate the website to you. And at the same time, we also have
to support anonymity. And I think we need both. I’ll tell you what. There’s a list of almost 23
questions here. And I have the feeling that
there are probably people in the audience who would like to
ask some questions that they didn’t ask ahead of time. So let me stop with the
pre-asked questions and ask if there’s anybody who would
like to ask a question live from the floor. There’s a microphone
down here. And if there’s a brave soul who
wants to ask the question, I promise I won’t spit. And if there aren’t any, I’ll
be happy to either go on or maybe run off the stage. Let’s see. When do you think ownership
of the top DNS will be transferred from the US
government to an international organism such as the UN? Well, I’ll be honest and say
I hope it doesn’t get transferred to an international organism such as the UN. I think that transfer of the
ICANN operation to a multilateral organization
would politicize it. I’d point out that ICANN
is a multi-stakeholder organization, which means that
governments, the private sector, the technical community,
and the general public have access to the process of
policymaking for the domain name system and internet
address allocation. It would be much better for this
process to end up in a multi-stakeholder structure, not
a multilateral structure like the UN. The Internet Governance Forum
just finished its meetings in Rio de Janeiro. It too is a multi-stakeholder
organization. And the conversations that
take place among those different stakeholders, I think,
are extraordinarily illuminating when it comes to
seeing what the different perspectives are about policy. So I hope that the answer is,
first, it doesn’t end up in a multilateral group, but rather
a multi-stakeholder one. And I would suggest that there
isn’t very much more to be done to extract the US
government from the role here. All it does right now in the
Department of Commerce is to validate that ICANN has followed
its procedures to do delegations of top
level domains. And that’s all it does. It’s never forced any
decisions on ICANN. It’s never rejected any
recommendations that ICANN has made. This isn’t to say that people
would like to see the US, many of them, not have this
special relationship. And I hope that sometime,
certainly, in 2008, there’s an opportunity to revisit that
relationship when we review what’s now called the Joint
Project Agreement between the Department of Commerce
and ICANN. OK. Well, let me stop there. And if there aren’t any
questions, I think maybe we can wrap up. I don’t know whether you would
like to say a closing benediction. RANDY KNAFLIC: No. I just would like to say
thanks to Vint Cerf.
