CHI 2019 Keynote – Ivan Poupyrev: Technology Woven In

– For the last 15 to 20 years, I've been working with physical interfaces, which I think of as intrinsic and extrinsic. Intrinsic is our own sense of physicality, which we can use to design interfaces, and extrinsic is the physical properties of things, which we can use to design interactions. And I think right now is a very good time to bring these interfaces, and the work that has been done over many years, into practice. This is the first computer I started my career with. It had an eight megahertz Intel microprocessor. It didn't have USB; USB hadn't been invented. There was no network adapter, because there was no network, and why would you need a network when you have two floppy drives? These were computers that barely worked, so thinking about any advanced interfaces was simply premature.
Today we walk around with supercomputers in our pockets. They have their own computing power, and whatever memory they lack is in the cloud, reachable over fast, reliable networks. And yet, with all this progress and all the development of the last two decades, our interfaces haven't changed. Fundamentally, we're still using mice and keyboards, clicking on icons, doing all this window stuff. Mobile phones are the same; there's not much difference. Conceptually, we're doing the same thing. And I'm disturbed by that: after all these years, regular people are still using something that was designed 15 or 20 years ago. I'm not the only one, obviously. Within this community, we've been developing physical interfaces and working on them for a long time. Hiroshi Ishii, somebody who has inspired me a great deal in my work, received the Lifetime Research Award, so if he's here, (clapping) let's congratulate him again. I like Hiroshi Ishii because he always frames things in very dramatic ways: it's a battle against the Pixel Empire. Well, if it's a battle against the Pixel Empire, it is a battle we are losing, because most people have no idea we're even doing this kind of work. The average person, as shocking as it is, doesn't know anything about CHI papers or CHI interactions, and yet that is what they're using, because there are no real products that embody the work we've done. There are no real devices people can buy. So we don't even know: do they like it, will it work, is this the future? If anything, things are getting worse. I just read a statistic that we spend about one day out of every seven online. Not only have our interfaces stayed the same graphical user interfaces, but our physical life is also moving into the virtual world. We're not heading in the right direction. We're heading toward virtualization, and the window into this virtual world, which takes up more and more of our life, stays the same or gets smaller: the screen of a mobile phone.
So what can we do about this? The question is: can we come up with complementary alternatives to the current graphical user interfaces? I'm not saying we're going to remove the graphical user interface or get rid of screens, but what complementary things can we create? And this is a systems problem. You cannot solve it by creating one gadget here and one gadget there, an interaction change here and an interaction change there. We have to create an ecosystem, an ecosystem that allows a whole system of applications and use cases to exist in parallel with the graphical UI. This is happening with voice, for example. We see a system of voice interfaces and voice interactions being created in parallel with graphical user interfaces. But it's still not physical, sorry. Voice interaction relies on voice alone; there is no physical element there. So what can we do for the physical world?
In thinking about creating user interfaces based on physical qualities, I was thinking about the world of everyday things. Can the world of everyday things become the platform upon which we build physical interfaces? The world of physical things is amazing. Look at this picture, for example: it's a picture of everything one person touched during a day. Every single thing. A picture like this tells a lot about who you are and what you do. This guy obviously likes motorcycles; you can see the pictures of his motorcycle gear. What can we say about this girl? She is five years old, she lives by the beach, obviously, and she spends all her time playing in the sea. Who is this guy? He's a chef. You can see all the ingredients he touched during the day. Computers and computing technology are a tiny part of his life, those two things in the corner. So if we spend our life, and most of our time, interacting with everyday things, can we use everyday things to interact with our digital life as well? And to put it in more dramatic terms: can the world be your interface? Can the entirety of things around you become your interface to digital life, so the computer can move into the background and become invisible?
That is the vision I've been working on for the last decade, thinking about how to do it, guided by a few very simple principles. The goal is not to create more gadgets or more things, but to make your existing things better by giving them new usefulness and functionality through a connection to your digital life, while at the same time remaining true to their original purpose. And remaining true to their original purpose is key, because we don't want to create more gadgets. Put simply, we are aiming to make better things, not gadgets. That is the vision I've been working on.
Today I want to talk about it in a little more detail. Every vision has three components: what, why, and how. What it is, why it's important, and how you're going to execute it. And as with every vision, it's the how where things become difficult. It's like world peace: everybody knows what world peace is, nobody in my entire life has told me they don't like world peace, and yet somehow we're still pretty far away from it. So in my talk, I'm going to focus on how: how we can take the world and inject computation and interactivity into the things around us. And I came up with a three-step program to make this happen. Number one: how to make things that are also interfaces. Number two: how to turn those things that are also interfaces into products, because we have to move them into the real world for people to use them. And number three: how to scale this so it works across multiple products in the real world. I'd like to spend the rest of my talk on these three points, and see where we are and how we're doing.
So, the first step: how can we go from computer interfaces to things that are interfaces? This is an area where this community has contributed a lot, and I am part of this community myself, so I'm going to talk about two projects that demonstrate the kind of technology we can create to take things and make them interfaces. Before I came to Google, I worked at Walt Disney Imagineering, and as you know, it's the happiest place on earth, obviously. But while it is the happiest place on earth, it is also very expensive to run. So the idea that you could take existing things in Disneyland or Disney World, turn them into interfaces, and make something different out of them was very attractive. This is a project called Touché, which I developed together with Sato, Harrison, and the team. It takes a wire, connects it to an existing object such as a doorknob, and the object becomes interactive: it can recognize your gestures and how you're touching it. The object does not have to be modified in order to become interactive. This is critical, because we cannot replace every single doorknob in Disneyland to make it interactive. So, how does it work? Let me make sure this works.
let’s make sure it works. So it’s based on capacitive sensing, an idea when the capacitive sensing, you injecting a single
frequency in objects. That’s how normal capacity sensing work, to create capacitive coupling. What happens, though,
is that if you create a, if you create multiple, multiple
very different frequencies, it will take different parts of your body. So you take off, if you
change the path to the ground, for example, we take off your leg, for example, you are a pirate, (laughing) then we’re going to happen that some of the some of the signals would not go through your leg. Well, wooden leg, to the ground, and you’ll see attenuation of the signal at that particular frequency, which is correspond to this path. What you’re going to get
is this bunch of waves, which represent a different
path a signal takes through the ground, through the object, and they’re using basic machine learning, you can recognize what
specific gesture you doing. Now beauty about this thing is that you can apply to anything. This, for example, can recognize how you’re touching the water with electrodes under the
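As a rough sketch of the idea, the swept-frequency pipeline might look something like this; the frequency range, the feature representation, and the nearest-centroid classifier are illustrative assumptions, not the actual Touché implementation.

```python
# Hypothetical sketch of swept-frequency capacitive sensing (SFCS) gesture recognition.
# Assumes a function that measures the returned signal amplitude at each excitation
# frequency; the real hardware, frequency range, and classifier may differ.
import numpy as np

FREQUENCIES_HZ = np.linspace(1e3, 3.5e6, 200)   # swept excitation frequencies (assumed range)

def measure_profile(measure_amplitude_at) -> np.ndarray:
    """Sweep all frequencies and return one amplitude profile (the 'capacitive signature')."""
    return np.array([measure_amplitude_at(f) for f in FREQUENCIES_HZ])

def train_centroids(labeled_profiles: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """Average the profiles recorded for each gesture into one template per gesture."""
    return {gesture: np.mean(profiles, axis=0) for gesture, profiles in labeled_profiles.items()}

def classify(profile: np.ndarray, centroids: dict[str, np.ndarray]) -> str:
    """Nearest-centroid classification: pick the gesture whose template is closest."""
    return min(centroids, key=lambda g: np.linalg.norm(profile - centroids[g]))
```

A real system would use a richer classifier trained on many examples, but the shape of the pipeline stays the same: sweep the frequencies, record a profile, and compare it against profiles learned for each gesture.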
For example, with electrodes underneath a fish tank, it can recognize how you're touching the water: whether you have one finger in it or your whole hand. And again, you don't have to modify the water; it's not special water, it's water from the tap. It can also be used on the body, so we can tell how you're touching yourself, how you fold your hands. If you want to stop the music, for example, a radio station you hate, you just cover your ears and it stops. So your body can become an interface, water can become an interface, and plants can become an interface as well. Plants were very interesting, because there are a lot of plants in Disneyland, and a plant's physiology lets you tell exactly where you're touching it. You can see the line moving up and down as we track the position of the hand. (techno music notes) So you can turn a plant into a musical interface, for example, and play a little composition. (techno music notes) You can play with this for hours. It can also track proximity, so as your hand approaches, it reacts more or less strongly, depending on where you are relative to the plant. [High Pitched Musical Notes]
The interesting thing is that the sensor can recognize specific plants. Every object has its own specific signature with this sensor, so we can also recognize which object the sensor is connected to. Every plant can be recognized, and now you can create characters for the plants, so every plant has its own behavior. For example, this is a video (drum sounds and clanging) of an installation we did in the park, where we created a drum plant. This kid loved it; he just kept going. There was a big leaf, and every time he hit the leaf, it made a little drum sound. There's a bit of a delay there, but people get it. (drum sounds and clanging) And surprisingly, it was the kids who really loved it; the people who loved it most were actually children, less so the adults. So this kid kept coming back. (clanging)
– Should be up on a pedestal.
– There’s an adult come in. – [Man In Striped Shirt] And
we should up the sensitivity. – Alright. (laughing) – Interactive plant, been
there, done that, you know. So, hard to impress, hard to impress, What can I say, the standards
are really getting high. Mickey Mouse loved it, so he signed off. So there was a big success. So people often ask me, what’s the practical application
of interactive plants? I mean tons of practical applications. For example, we can make a calendar plant which can control a calendar. (laughing) The Enterprise Edition. (laughing) For the stressed-out managers. Or, we can make, replace
the button in the elevator with a plant with the cactus, right? (laughing) People talking about you know, the people don’t exercise enough? Well let them walk up down stairs, or short themselves if they
want to call the elevator. So a lot of good ideas,
a lot of good ideas. (laughing) Anyway, so this is a
Anyway, that is a project I did when I was working at Disney. When I moved to Google, I was thinking more about the intrinsic qualities of physical interaction. I was looking at gestures, at how we use our physical bodies to interact with things. Kinect was there, of course, a brilliant product, but it's hard to put a Kinect into everyday things, into the things you use every day. So how would you create a sensor that can be part of, one second, here, that can be part of everyday objects? Something tiny enough to go anywhere, so that everything can be aware, can understand your gestures and what you are doing, and react to it. This became Project Soli. Project Soli started at Google, and the paper was published at SIGGRAPH in 2016.
With Soli, we were looking at a completely new sensing modality: radar. And radar has amazing properties. Radars can, in theory, be very small, and they work through materials: you can hide them inside textile or behind plastic and the signal goes right through. They work day and night, independent of illumination, they can do tracking, and a lot of other good things. The only problem was that when we started, radars were huge; there was no radar small enough to put inside a device. Fortunately, we didn't have to start with the big antennas. We started with this kind of device, which is still fairly big, and within about 10 to 12 months, through a variety of iterations, these are all the prototypes we built, we were able to shrink it down to the size of a chip, working with our partners at one of the semiconductor companies; the actual proportions are shown there. It's an FMCW radar that works at 60 gigahertz. It emits signals using two TX, transmit, antennas and receives them with four RX antennas, and we interpret those signals to try to understand what's going on.
This is the basic principle of how the radar works: it emits a signal, captures the reflection, and gets a one-dimensional wave that represents the superposition of all the reflections from the hand. As your hand moves, this wave changes; we capture those changes and then, using machine learning, understand what the motions are. That is the fundamental principle behind Soli, and it is different from traditional radars, which usually rely on spatial resolution; we are looking at the temporal behavior of the radar signal. This is what the range-Doppler signal looks like.
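To make that processing chain concrete, here is a minimal sketch of how a range-Doppler map can be computed from FMCW chirp data: a range FFT along each chirp, then a Doppler FFT across chirps. The array shapes and windowing are illustrative assumptions, not Soli's actual pipeline.

```python
# Hypothetical sketch: computing a range-Doppler map from raw FMCW radar data.
# Assumed input: a (num_chirps, samples_per_chirp) array of beat-signal samples
# from one receive antenna over one frame. Real pipelines differ in windowing,
# calibration, and clutter removal.
import numpy as np

def range_doppler_map(frame: np.ndarray) -> np.ndarray:
    """Return the magnitude range-Doppler map for one radar frame."""
    num_chirps, samples_per_chirp = frame.shape

    # Range FFT: resolve distance along each chirp (fast time).
    window_fast = np.hanning(samples_per_chirp)
    range_profiles = np.fft.fft(frame * window_fast, axis=1)

    # Doppler FFT: resolve radial velocity across chirps (slow time).
    window_slow = np.hanning(num_chirps)[:, None]
    doppler = np.fft.fftshift(np.fft.fft(range_profiles * window_slow, axis=0), axes=0)

    return np.abs(doppler)  # rows: velocity (Doppler) bins, columns: range bins

# A gesture is then a short sequence of such maps; it is how this sequence evolves
# over time, rather than fine spatial detail, that the models look at.
```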
Interpreting the signals from the radar was very challenging, because they don't have a natural physical interpretation that you can look at to understand what's going on. These are four gestures across five users. Even though the range-Doppler data is difficult to read, you can see how similar the signals are across users, and how different they are across gestures. So intuitively, it should be possible to pull them apart and make this work.
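In that spirit, a minimal, assumption-heavy sketch of "pulling them apart" could flatten each gesture's sequence of range-Doppler maps into a feature vector and train an off-the-shelf classifier; Soli's real recognition models and features are more sophisticated than this.

```python
# Hypothetical sketch: classifying gestures from sequences of range-Doppler maps.
# Assumes `sequences` is a list of arrays shaped (frames, velocity_bins, range_bins)
# and `labels` the matching gesture names; the features and classifier are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def to_feature_vector(sequence: np.ndarray) -> np.ndarray:
    """Summarize a gesture: mean and standard deviation of each range-Doppler cell over time."""
    return np.concatenate([sequence.mean(axis=0).ravel(), sequence.std(axis=0).ravel()])

def train_gesture_classifier(sequences, labels) -> RandomForestClassifier:
    features = np.stack([to_feature_vector(s) for s in sequences])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features, labels)
    return clf

def predict_gesture(clf: RandomForestClassifier, sequence: np.ndarray) -> str:
    return clf.predict(to_feature_vector(sequence)[None, :])[0]
```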
In parallel with building the technology, we were trying to understand what kind of interaction language you could design for the radar, one based on motion rather than static gestures like a V-sign. This is our first prototype, built for interaction design, very low-key. After some careful exploration, we settled on the idea of representing the motions you make when you operate physical controls: moving a slider, turning a knob, turning a key, or using your phone one-handed with your thumb. These gestures are much more universal than an okay sign or a V-sign, because those are culturally specific, while these control motions are universal across countries and cultures. So what does it look like? Something like this: you have a hand, and this motion represents the motions of a touch panel, so you can scroll this way. Or it could be a dial; this kind of gesture can represent a dial. Very easy to explain, very easy to visualize and understand, and these are the sorts of gestures we looked at. The beauty of these gestures is that they are based on motion, not on static postures. And there's a button as well.
This is our basic idea, and these are our visualizations of how it could work with actual devices. This is a concept video we made to understand how it would work, and this is how it could work on a watch. The beauty of the radar is that different positions of your hand can represent different functions; you can move your hand up and down. But the basic idea, of course, is that your hand becomes the user interface, and that's all you need. You don't need buttons, you don't need sliders; your hand is the only thing you need to control everything, and everything can be controlled with these basic gestures attached to your hand. These are actual working versions that we demonstrated at Google I/O about three years ago. You can see how sensitive it is as the hand moves; really tiny gestures can be tracked. You can build simple interfaces: you can slide through a menu, and it accelerates and stops. Or you can use the space around the object to control different things: you control hours, then you control minutes, by moving your hand higher or lower. This suggests the kind of language you can build around the radar with intrinsic physical interfaces. Of course, it can also do simple games, such as soccer, with a simple flick gesture.
Now, the biggest problem after we built the first version was that the sensor consumed a lot of power, and legal told us to put a sticker on the device: hot surface. So we had to spend quite a bit of time shrinking the sensor and improving its power consumption. The second version of the sensor, which we showed about two years ago, was 22 times more efficient and fit into a much smaller form factor: power consumption went from 1.2 watts down to about 55 milliwatts. With that, we were able to build it into a watch. This is the first prototype, with the sensor inside the plastic; you can see the radar works through the plastic, so you can still do things. These are the same demos running on this first version of the watch. Then we built a second version, a more attractive smartwatch, which is the one we have. We designed a whole new interface for it based on a spatial paradigm: the space to the right side of the watch is a proximity sensing zone, and moving your hand there controls the level of detail at the macro level of the interface, while if you move into the gesture space, you can perform gestures and interact with the actual controls. So let me show you how it worked.
This is a real watch, running on its own batteries. As your hand approaches, it makes a selection; the line shows how far you are from the device. Once you're in close, you can control it with gestures: you can scroll through the elements, touch your fingers together to select an element, and then scroll through that. Bring it up again, go back. You can control it, and then when you move your hand away, it returns to its neutral state. These are early versions of how we can use radar to create very simple gesture interactions in a very small form factor.
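A sketch of how such a spatial paradigm could be wired up is below; the zone thresholds, states, and gesture names are invented for illustration and are not the actual watch firmware.

```python
# Hypothetical sketch: mapping radar-derived hand distance and gestures to UI states.
# The distance thresholds (in meters) and gesture vocabulary are illustrative assumptions.
from enum import Enum, auto

class UIState(Enum):
    NEUTRAL = auto()      # hand far away: ambient display
    OVERVIEW = auto()     # hand in the proximity zone: macro-level details
    CONTROL = auto()      # hand in the gesture zone: fine gesture control

PROXIMITY_RANGE_M = 0.30  # assumed outer boundary of the proximity zone
GESTURE_RANGE_M = 0.10    # assumed boundary of the close-in gesture zone

def next_state(hand_distance_m: float) -> UIState:
    """Pick the UI state from the current hand distance reported by the radar."""
    if hand_distance_m <= GESTURE_RANGE_M:
        return UIState.CONTROL
    if hand_distance_m <= PROXIMITY_RANGE_M:
        return UIState.OVERVIEW
    return UIState.NEUTRAL

def handle_gesture(state: UIState, gesture: str) -> str:
    """Only act on fine gestures when the hand is inside the gesture zone."""
    if state is not UIState.CONTROL:
        return "ignored"
    actions = {"scroll": "scroll list", "pinch": "select element", "flick": "go back"}
    return actions.get(gesture, "ignored")
```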
This community is doing a lot of work, creating a lot of amazing research that can plug into this vision of making things interactive. But the second challenge is just as hard, if not harder: how can we take those research prototypes and turn them into products? This is very difficult. The first question to ask ourselves is: who is going to do this? Who is going to build products based on these technologies? Who can make everyday things embedded with technology?
And the problem is that the world of things is huge. A hundred and fifty billion garments are made every year by the apparel industry worldwide. In comparison, only 1.4 billion phones are made by the consumer electronics industry worldwide. It's a market a hundred times bigger, so it's an illusion to think that we, as technologists, as technology companies or as a technology community, can create this market of 150 billion garments ourselves, and that's just garments; add furniture and everything else and it's impossible. So the approach we took in my team is that we have to create technology that allows makers of everyday things to become makers of smart things. We have to help transition those industries from the kind of pre-industrial-revolution stage where they are now into the modern day, and help them do this, rather than trying to do it ourselves. And for that, we should treat technology as a raw material. It has to behave, act, and look like the materials used by the people who make these things. And we formulated a very simple challenge: can a tailor make a wearable? As simple as that.
For a tailor to make a wearable, the technology has to fit into the tailor's workflow, right? So a touch pad made for a tailor, for example, should be a textile touch pad, not an electronic touch pad; the tailor is not going to become an electrical engineer. So how can we make that? It has to be made in a way that a tailor can use in his or her own process. And that became Project Jacquard.
Project Jacquard tried to create this whole flow. Sometimes people think that Project Jacquard is about the yarn. The yarn is just one tiny component; Project Jacquard is about helping the industry become a maker of smart things. To do this, we started from the very beginning. We went to a factory in Japan that makes yarns. This is the factory. We worked not with engineers but with artisans: this is the person who runs the factory, and an artist who knows how to make things beautiful. This is our team that started working on the yarns. Working together, on the same machines that have been used to make yarns for kimonos for generations, we created a new conductive yarn, which I believe is still one of the best in the world. It can come in multiple colors and multiple thicknesses, and it can be silk, cotton, or anything else. It consists of very thin alloy wires in the middle, braided around with polyester, silk, or something else. Now we can take these yarns, move up the supply chain, and give them to the people who make textiles.
(machine noises) This is a factory in Japan to which we transferred the knowledge of how to weave textile touch panels. Once you teach them how to do this, they can do anything they want; we don't have to explain how to apply it to different types of textiles. You can see our textile being woven, two-dimensionally. And once it's explained, they can do anything they want. They created a variety of different textiles. This is a regular textile; touch panels of different shapes can be made. You can see the dark area at the bottom, which integrates the sensing into the design so that it's invisible within this kind of contemporary design. We made an organza-type textile, which is transparent. And we can also make it completely invisible, so you don't even see the touch panel pattern in the textile. All of this is possible without us being involved; we just let them go wild. And after we came up with all the textiles, we took them to Savile Row in London, to the tailors, and asked them: what can you do with this kind of textile? How can you build a garment that integrates these textiles? (calm orchestra music)
The interesting thing about tailors on Savile Row, and tailors in general, is that they don't use computers. That's why I went to tailors. They use chalk to draw, everything is done by hand, they cut with scissors, which is probably why you pay a premium price if you buy a jacket there. So everything we gave them had to fit into their process and be workable by hand, not by machine. You have to be able to draw on it, cut it with scissors, and iron it as they work on the garment. If things don't fit into their process, they simply won't use them. They fit the textiles on your body, not on avatars; they don't have virtual avatars or 3D modeling machines. They use needles and thread to sew and put things together. And for them, the look of the jacket is the most important thing; technology cannot interfere with the design. It has to be very simple to assemble: a simple click, and it's attached, built into the garment. And it has to look and feel like a real jacket. I make a gesture, and I can make a phone call. (phone ringing) So with this experiment we proved that a tailor can make a wearable, if the technology is treated as a raw material and acts as a raw material in their work process.
Our next step was to take this technology to a proper company, one second, to Levi's. The way we think about it, and this is how we expressed it, is that Project Jacquard technology is a lot like cotton: it's part of their process. Working with Levi's, we created a new garment, a jacket for cyclists. It's the jacket I'm wearing right now, and I've been using it to control my presentation; there's a little bit of lag here, so sometimes I use the clicker, but mostly I've been controlling the presentation from my sleeve. So this jacket was designed for cyclists: as a cyclist rides through the city, they can control their music, know who is calling, get navigation details, drop a pin, and all sorts of notifications can come to the jacket.
This is a commercial product that we produced. People often focus on the jacket, but the product actually consists of four components, and these are the four components you need if you want to make the world of things interactive: the jacket itself; a piece of electronics, the Jacquard tag, which holds the battery and controls the garment; the Jacquard app; and the Jacquard services that live in the cloud. Let me quickly go through them one by one, to show what has to be done to make this work.
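As a mental model of how these four pieces fit together, here is a minimal sketch of one gesture event flowing from the woven panel through the tag and the app to the cloud; the class names, transport, and payloads are assumptions for illustration, not the real Jacquard APIs.

```python
# Hypothetical sketch of the four components described above and how an event
# flows through them: woven touch panel -> Jacquard tag -> phone app -> cloud.
from dataclasses import dataclass

@dataclass
class GestureEvent:
    gesture: str       # e.g. "double_tap" (assumed gesture name)
    garment_id: str

class CloudService:
    """Stand-in for the Jacquard services that live in the cloud."""
    def record(self, event: GestureEvent) -> None:
        pass               # e.g. store usage, sync configuration

class JacquardTag:
    """On-garment electronics: battery, reads the woven panel, streams events to the phone."""
    def __init__(self, garment_id: str):
        self.garment_id = garment_id
    def on_touch(self, gesture: str) -> GestureEvent:
        return GestureEvent(gesture, self.garment_id)

class JacquardApp:
    """Phone app: receives tag events, triggers the configured behaviour, syncs with the cloud."""
    def __init__(self, cloud: CloudService):
        self.cloud = cloud
    def handle(self, event: GestureEvent) -> None:
        self.cloud.record(event)
        print(f"{event.garment_id}: handling {event.gesture}")

# One gesture flowing end to end:
app = JacquardApp(CloudService())
app.handle(JacquardTag("commuter_jacket").on_touch("double_tap"))
```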
In the case of the Jacquard jacket itself, there is a woven touch panel integrated here; it's here on the jacket. (rhythmic music) It was produced in a factory, the same factory that makes Levi's regular products. You can see it here. So it's no longer a tailor; it's machines and proper equipment, and it has to fit into their process, and we spent a lot of time on that. For example, the panel has to be treated with an open flame as it goes through the weaving; it's called singeing. This is how our panels come out; this is (mumbles). They're usually cut in batches; they cut them in batches and then sew them, and the same people who make their jackets make our jacket. Then you have to wash it; that wash removes the starch from the fabric. There's testing, so you have to put QA and QC processes in place so that the technology works. And there's the final jacket.
The goal of Project Jacquard was always to create something that Levi's could make, not us. We don't want to become makers of jackets; Google, as we know, is not a clothing company. The second component is the Jacquard tag, the piece of electronics here. You have to have some electronics, and for the time being it has to be kept separate from the garment; I'll explain why.
The idea, and the model for this tag, was the strap on a coat: it has to feel like a real part of the clothing rather than an external element. The design language is very important for Levi's when you work with their designers; they want to create a garment. This is how it looks normally. There's an LED at the bottom, and if you open it up, there's haptic feedback and the battery, and it clicks in with a button. It was very important that the device feels like a real button, because buttons are one of the iconic elements of their garments, so they really wanted it to be a button. So that's how it looks.
The third component is the Jacquard app. The Jacquard app is used to configure your jacket to do the things you want. Obviously, we don't want to create a jacket that does just one thing; that's not exciting or interesting. So we created an app that lets you configure the jacket to do what you want. To do that, we created a simple visual language which, while simple, actually turns out to be quite complex underneath. There is a set of virtual controllers on the left side, which represent the gestures the jacket supports and the notifications Jacquard supports, including haptics. Then there is a set of so-called abilities, what Echo calls skills, in our case they're called abilities, small micro-apps that you can assign to those virtual controllers. You do this by physically dragging them from one place and dropping them onto the controls. Once you assign them, that particular gesture becomes active and can be used to control that piece of functionality. So even though it's a simple language, there's a lot of complexity around the different edge cases. Okay.
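Here is a minimal sketch of what that gesture-to-ability configuration could look like as data and code; the gesture names, ability names, and validation rules are assumptions made for illustration, not the real Jacquard app.

```python
# Hypothetical sketch of the app's configuration model: virtual controllers
# (gestures) on one side, abilities (micro-apps) on the other, and user-made
# assignments between them. All names are assumed.
from typing import Optional

SUPPORTED_GESTURES = {"brush_in", "brush_out", "double_tap", "cover"}
AVAILABLE_ABILITIES = {"next_track", "previous_track", "drop_pin", "read_notification"}

class JacketConfig:
    def __init__(self):
        self.assignments: dict[str, str] = {}   # gesture -> ability

    def assign(self, gesture: str, ability: str) -> None:
        """Drag-and-drop in the UI ends up as an assignment like this one."""
        if gesture not in SUPPORTED_GESTURES:
            raise ValueError(f"unknown gesture: {gesture}")
        if ability not in AVAILABLE_ABILITIES:
            raise ValueError(f"unknown ability: {ability}")
        self.assignments[gesture] = ability     # reassigning overwrites: one edge case of many

    def ability_for(self, gesture: str) -> Optional[str]:
        return self.assignments.get(gesture)

# Example: make brushing inward on the sleeve skip to the next track.
config = JacketConfig()
config.assign("brush_in", "next_track")
```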
And then finally, there is the cloud: the Jacquard services that support the jacket and the whole system. The jacket was launched not in consumer electronics stores but in regular clothing stores, which brings a whole set of other challenges: how we train sales associates so they know how to sell the jacket, what customer support looks like, and so on. But this is the first piece of smart apparel that was treated, from beginning to end, as a piece of clothing and not as a piece of technology, and that was the goal of Project Jacquard. Let me show you a video that Levi's produced when they launched the product, which shows how it works. (bicycle chain rustling) (paper rustling) (click) (lo-fi hiphop music) (chimes)
– Call from unknown. (chimes) Getting next direction: turn left onto 15th Avenue. (upbeat lo-fi hip hop music) (camera clicks) Text from Emily: I'm running late, see you at the shop. You are five minutes away and will reach your destination by 7:05 a.m.
So that was the first product we did, and the challenge is that it's one product. A hundred and fifty billion garments are sold every year, and this is where we are now: there's a dot, but you cannot see it. If you look really closely, there's a tiny dot which represents Jacquard. So there's a third challenge, which we're working on right now, and it's our next step: how can we go from a single product, from one device, this one jacket, something we learned to do really, really well, because we went all the way from making yarns to taking it into the store and selling it as a piece of apparel, how can we go from that one product to a world of connected things? What's the path? We're at the execution point; we're doing this right now, so I'd like to talk about the vision of how we imagine it's going to happen.
And I want to remind you that the goal is to make existing things better, not to turn them into gadgets, right? So we have to work with the people who make things, and make sure they can use our technology like a raw material rather than as a gadget they just attach to their stuff. And if you think about it this way, it means that every product we create will have different types of sensors, actuators, or displays, specific to the purpose of that particular product; it's not going to be the same everywhere. For example, a running shoe would not have a touch panel. Why would a running shoe have a touch panel? It makes no sense. Instead, a running shoe will have sensors that measure running speed, or knee impact, or something else. Every product has its own sensors and actuators. That means that the makers of things, the people who make those things, should start thinking about services as much as they think about things. It's not just the design of things; it's the kind of services these products provide. Which means those companies will become service providers, and we have to start building a service ecosystem for things, the same way we have a service ecosystem for mobile phones right now. You have a mobile phone, you have all the apps and services and everything else, and sometimes you still make a phone call.
The same thing has to be done for things if we want this to be successful. And the question becomes how. We have to avoid fragmentation: the most important thing is to make sure that the interface and the user experience are consistent, the same across all those things. So how can we do this? There has to be one computational platform that unites them all, presumably with the same user experience, and obviously this platform has to run in the cloud. And that has now become possible. Until relatively recently, it was simply impossible to run this kind of thing in the cloud; it wasn't there. But today we have the opportunity to use true cloud computing, not just for storage but for running the computation as well. The things all have to be connected, so we need a single device that lets us connect to them: a single piece of electronics that can be shared across those things and builds the bridge from the physical world to the virtual world. And I have it here.
This is what we're building right now: a small, tiny computer. Here's the picture, and I have it here in my pocket; I'll show you. This is it, pretty tiny. This is the first prototype we've built, the first prototype we're testing right now. It's a small computer, and you can see it has six connectors on the back. This product has gone through certifications in multiple countries, and through all sorts of testing, durability, water ingress, all the important things you need to do in order to ship a product. The idea is that it can go from product to product to product: you can plug it into a shoe, you can plug it into a bag, you can plug it into a jacket. And as you move it from product to product, it reconfigures itself to the functionality that the particular product supports. So it adapts dynamically across the ecosystem of products, but the computing element stays the same.
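A rough sketch of what "reconfiguring itself to the product" could mean in software is below; the product identifiers, capability profiles, and detection mechanism are invented for illustration, not the actual platform.

```python
# Hypothetical sketch: one shared computing tag that loads a different capability
# profile depending on which product it is plugged into. Product identifiers and
# profiles are made-up examples, not a real device API.
CAPABILITY_PROFILES = {
    "running_shoe": {"sensors": ["imu"], "abilities": ["running_speed", "knee_impact"]},
    "backpack":     {"sensors": ["imu", "gps"], "abilities": ["street_navigation"]},
    "jacket":       {"sensors": ["touch_panel"], "abilities": ["music_control", "drop_pin"]},
}

class UniversalTag:
    """Shared computing element; the surrounding product tells it what to become."""
    def __init__(self):
        self.profile = None

    def plug_into(self, product_id: str) -> None:
        # In reality the product would identify itself electrically or via the cloud;
        # here we just look the profile up by name.
        self.profile = CAPABILITY_PROFILES.get(product_id, {"sensors": [], "abilities": []})

    def abilities(self) -> list[str]:
        return self.profile["abilities"] if self.profile else []

tag = UniversalTag()
tag.plug_into("backpack")
print(tag.abilities())   # ['street_navigation']
```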
What we plan to do is give this computer to the makers of things, to the brands and companies who have been producing those everyday things, and they would use it just like any raw material, together with sensors, which can also become a kind of raw material, the same way they use buttons, zippers, and all the other materials they put into a product. And what they do with it is up to them. I think it's very important that we don't try to create all the use cases ourselves; we can't create all the use cases for everybody who builds products. For example, a company that makes backpacks can create a navigation application that guides you through the streets. A company making work jackets can track your health, making sure you stand up and don't sit for too long. An exercise company can track how well you're performing your yoga exercises, and an evening jacket can make sure you don't leave your phone behind when you get out of a taxi. These are all small, tiny applications, and you would never be able to build them if you had to build the entire electronics yourself for every one of those applications. But if you're using a common platform, all run by the same backing infrastructure, all these tiny applications become feasible and possible, and that's the goal.
So fundamentally, what we are creating is a tiny ubiquitous computer that allows us to weave computing, intelligence, and interactivity into your everyday things, to realize the ubiquitous computing vision of technology woven in. And that is the end of my talk. Thank you very much. (clapping)
