re:publica 2019 | tl;dr | Stage 1 – Day 2 – ENGLISH


[Re:publica theme music]. [Re:publica theme music].
>>[German spoken]. TRANSLATION: My name is
Pretorius, and I will be your emcee for for Stage
1 today, part of a big team of moderators. If you have any
issues, please hit me up. I have a smart note pad here, lots of notes where I can send you
directly to the side events, and I have the honour to announce
the great sessions here today. Some organising remarks that I
want you to know about: there is a
so-called “ask me anything”. We have a great international
programme again this year, and our international speakers have
a separate international corner, international space space. They
appear in the community garden, and, for example, at 1230 today,
until 130 pm, there will be an Ask Me
Anything, and, after 1. 45, there will be – Cory will be
here together in the booth in the main hall, and then an
Ask Me Anything with Berndt Tony, and
Alexis Hope later. Two other changes, one other small change. Here, Stage 1, with 1324, we had
announced Fairness in a Digital Society. That has been moved to tomorrow,
1230 and, instead here, at 1342, we
will – at 1345, we will have the
algorithmic boss. You’re here now for Torben
Lütjen, critical scientist at Vanderbilt
University, and he will talk about populism as anti-author
tear young revolt. Give it up for to be Torben
Lütjen. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. – I’m happy to be here, my first
time at re:publica. A few people have already shown up, even though it’s very early,
especially for Berlin! Still you came. Today, we’re going to talk
about something which you could say we’ve been talking all the
time about, and without any pauses, namely populism, so
you could think maybe everything has been said but not by
everyone, and that’s why I can be here today. I will try to give a maybe, or
maybe not, probably diverging opinion about populism, and you can see the
the core themes in the slide behind me. We can talk about what already
exists, basically as a political science consensus, what populism
means, so there is a consensus that it’s not an
ideology such as socialism or
Conservatism, or liberalism, or fascism. There is no canon of collected
works, for example, like Marx with
socialism, or John Locke for Liberals. That doesn’t exist. Populism does not have this this
goal of explaining everything. So, what is the big goal of
populism? If you try to find out the basic thoughts, you wouldn’t
really find anything. Maybe one day, they want all of
the columns of the newspapers to be
built as a canon. It is a rhetorical means that
you can use so maybe you could call it a
pre-ideological reality whose core is very simple and
efficient. And that is a division of
society. On the one side, the honest,
hard-working upstanding folk, and, on the other side, the
decadent, lying, corrupt elite. It doesn’t have to have this
this populist – sorry, yes, this this
folk dimension, but it’s always about dividing society. The
other thing is that it is always the idea of a unified popular
will. That’s one popular will that you can find, and that
exists. And it’s being twisted and
warped by the left. 0of course, this can only exist
on the desks of Conservative political scientists, whereas, in reality,
society is pluralist and diverse. That is the core of
anti-populism. There are many movements that try to counter
it. And And that is also why
populism itself is very susceptible to to conspiracy theories. So, for
example, if you’re an American President, you would get 47 per
cent of the votes, of course, something has to be wrong, so, of course the
majority ratios has been changed somehow. So this is enough for the
populism to create conspiracy theories. All this is talked a lot, it’s
very much accepted. Where I wanted to talk back is
where populism gets another adjective as well which is the “authoritarian”
populism, or the authoritarian wave that is going through
Western democracy. Many people think that we can divide up the world into left and right,
so anti-authoritarianism and authoritarianism, and one of the
best books about the new right is
called The Authoritarian Revolt. I don’t think so. I think it’s an
anti-authoritarian revolt, and my thesis is the populists of today are the children Feof a perverted form of enlightenment. In the next 30 minutes, I want
to explain what I mean by that.
First of all, it makes sense to think about what are the terms we’re
talking about? I don’t want to bore you with
long theoretical explanations, but you should know that, like the phrase ”
phrase “authoritarian” comes from the
1960s, Darnell, one of the most famous authors about this. Others are not as famous, and
the term term “authoritarian” is not political by default. It’s about this prepolitical
self-image. It’s very psychological, and
there are three things that mark
authoritarianism. Conventionism, so, do not get
out of line; second, like looking at
strong father figures, and the the defeating the foreign unand
unknown. and unknown. There are things
that you can easily recognise. In America, there’s a good
saying, “Give the devil its due”, and I
have to give the devil its due here, and say there are a lot of
right things right here. Right-wing populism is strong in Eastern Europe through through
long years of populist dictators where they gained ground, and a few asked
the AFD if they want a strong leader, then you get strong
statistical response, but in the US, as, for example, if you look
at Democrats and Republicans, they educate their kids
differently. Republicans go for bodily punishment. If you ask them if they like
creativity more or respect more, then they
say they like respect more. And these things are true, but I
don’t think they’re the complete truth. So there’s a strong stream
against this that clashes with authority, and
so I want to tell you about this with a story, a story from a world that
wasn’t taken seriously before 2016, and
that is the world of the alt-right movement in the US, a new and aggressive
form, a nationalistic form of American
American conservatism. We have a huge metaphor of “red
pilling”, meaning giving someone the red pill which means opening
their eyes. They will learn truths that they
didn’t know before, and the truths that they should learn is
that liberalism is based on a lie and should convert them
to the ideas of their own movement. A few of you might
have thought this is a pop cultural leaning from a a an important for him out of 1999 –
the Matrix, perfect virtual realities so also dystopian, so
is it fits very well with the topic of re:publica –
and because what you can see in that movie is so impressive, this is it, I
brought a little snippet from that move which, which is good
for the translators, because I’m very sorry that I don’t take any
breaks, but now we have a break. break. [Video] you were a slave, Neo
Neo, born into a bondage, born into a
prison for your mind that you can’t see, smell, taste, or
touch. Unfortunately, nobody can be
told what the Matrix is. You have to see it for yourself. This is your last chance. After
this, there is no turning back. You take the blue pill, the
story ends, you wake up in your bed and believe whatever you
want to believe. You take the red pill: you stay
in Wonderland, and I show you how deep the rabbit hole goes. goes. Remember, all I’m offering is
the truth. Nothing more. Follow me. – He swallows the red pill. The
blue pill would have meant to forget all your troubles and
stay in the Matrix. If you’ve seen the movies, I would probably take the blue pill
because the the order is getting destroyed by submarine robots,
but people take the red pill because they want to know the
truth. The interesting is, when that movie came out in 1999, at first first, it was talked about in
this way. It was interpreted as Marxist,
and of markist enlightenment. Many people don’t know that they
live in a class society, and the populists is being ruled by the the – if as a
comrade, you take the red pill, you know how the actual
relationships are between you and the populist. This was very much accepted even
in the transgender community as an idea
of, like, enlightenment, showing that the world could be better.
These days, it’s more that. This is one of many memes, as
they’re called, that are flying through
the huge spaces of the alt-right where the red pill is the symbol for if you
swallow it, I have converted you, because conversion is a very important
motif – motive in these societies,
conversion to conservativism. In the alt-right movement,
there’s a whole Cosmos of narrative where the liberal
society is talked about as a cathedral, and people are
preaching values like multi-culturalism,
equality, but don’t even know the that they’re in a church,
and thinking about that in this way, this is very unusual
for a a conservative American movement to present your enemy
as a church. Let’s move away from the the US. If you can take these images,
you can – this is the idea of responsible citizens that the I
ISIF presents, and this is the tip of the iceberg, and this is
the the image of the right-wing populists of the
the – like sheep following the herd and there looking from the
outside and they’ve realised the truth, the
responsible citizen that is is talking about emancipation, and the
empowerment, and, in other countries, there, in
other countries, there are other terms like “emancipation”, and
that would be the cathedral, for example. In the Netherlands, the
people are talking about emancipation, so
people are taking themselves seriously. You don’t have to buy it
yourself, but right-wing populist voters have this
self-image. What I’m talking about here is populism appeals to people who think
they’re very competent and ready to understand the world in all its complexity, and
they don’t need mediators. They don’t need mediators. They can directly dig through to
the the truth, so, of course, this
is, like, of course you have to take this as a provocation, but
they call this right-wing self-empowerment, and we take
offence to that because we want to reserve this term to left-wing
empowerment. And this this fits into this
narrative that the right-wing has been able to take this non-conformist
narrative from the left and position themselves
as the true counter culture. And, of course, this does mean,
and I have to say this first, of course, even with this way of thinking, you
can push the very authoritarian ideas and states of mind, but what I, when I
chose like what is the view of these populists on the world, what other other –
how do they view the truth? How do they present the truth? And is this is very clear, and
this is where it started my thought
process. Let’s compare this to the the clearly conservative
authoritarian movements of the past. If you look at the 1930s and
later, classic conservatism didn’t know this
idea of anti-authoritarianively. If you look at Berkeley, for
example, he said, look at your
forefathers, accept that not everyone is actual able to have a complete opinion and
and lead a state. If you don’t, if you’re the head of a family,
or at least, then you’re not predestined to lead. If you take fascism, this idea
is suspended even more, the the
mystical leader of Fuhrer that brings the
truth of forefathers. Even in left-wing authoritarianism, you find this idea of someone having
their truth of the party is doting out
because not everyone is able to
understand Marx properly, so this seems to be a difference. And Eric Hoffer, an American
publicist, the term “the true believers”,
that in the 1920s and 1930s, he defined those people who are so subjugated by
their own self, by their own individuality, that they
deliberately sacrifice themselves to the bigger
community. So kind of a self-extinction
fantasy. I think that does not apply to
the IFD and similar people. These people are proud to
understand out of the mass. They do not want to dissolve
themselves in the bigger community, and then these people
that are marching, that’s a different case. The core of right-wing extremism
is about other things. Often they’re being interpreted
as individuals who suffer from this unleashed majority who left
them behind, and they’re without orientation but I don’t believe
that to be the case. I think they’re the product after
radically individualised society that tend to develop themselves
radically as individuals. So, if you buy into this
interpretation, if you give them the chance, then a lot of
phenomenons are easily explained. Think about organisation Chaos
in the AFD. The permanent in-fighting for
the leaders of the party about questions that don’t really have
any effect on the voters. You would think that voters would
have a disciplined movement, but
there’s no trace of that to be seen. Looking at the social
hierarchies, it looks like the the kinder
Garthen of my children. That that doesn’t fit with the
classic discipline anywhere young terms, even with the thesis that it would be
charismatic leaders leading the parties. ` young terms, even with the
thesis that it would be charismatic leaders leading the
parties. – even trump, the central culture of this person.
Even with Trump, you should see him as driven more than the
leader. It is true that he can completely ignore all kinds of
accusations from the opposite side, and there is this Faustian pact, but if you read
about it, how people perceive him in forums, as soon as he
takes one step out of line away from the America First agenda,
people are immediately suspicious. So So trump acts like a famous
protagonist of the French Revolution, I have
to follow them, they’re the leader. Trump is just trying to satisfy
this space. In the past point,agenda, people
are immediately suspicious. So trump acts like a famous
protagonist of the French Revolution, I have to follow
them, they’re the leader. Trump is just trying to satisfy this
space. In the past point, point, father is teaching,
punishing, teaching, not as a moral instance, is that true for right populist
party leaders? Hmm, I mean, of course, this is
a little bit manipulated. I could also show you the very stern-looking stern-looking
Viktor Orban here, but they don’t fit the strict father
figures of the past, more the Buddy fathers of today. You can
stay as you are, keep making the jokes that you do, you can
keep writing as you have, you can
still eat still eat your Schnitzel as you
did before, and those are the – that might be the biggest
strength and also the biggest weakness of populism. It does
not demand anything from the people neither morally, or
intellectually, it just takes them as they are. To make that clearer, maybe with
this individual self-empowerment, you
can commit any kinds of mass manipulation. When I saw this ten years ago, I thought it made a lot of sense,
and many of you know Fox News, I guess, a
conservative news station, or maybe you should say the Pravda of Donald
Trump. The slogan of slogan of Fox News
is “You decide, we just do the reporting. You make up your mind. ” Of course, if you know what
they’re broadcasting, it a a it’s a
little bit absurd. It’s a congenial position to be
in that lets people be who they are and make their judgment.
That’s why populists are not any followers of romantic school or
knowing a mystic ideology but rather prop opinion Nantes of a modern
rationalism. It’s strange to see the conclusions that they sometimes get to this
also, a big stream of
anti-intellectualism which doesn’t mean celebrating
stupidity but celebrating the common-sense of the common man
on the streets against the abstract realities of science.
In some consideration that might make sense but take absurd
forms, for example, this Republican Senator
from Oklahoma who tried to to counter climate change by
bringing a snowball into the Senate floor, but most people don’t argue like this.
It’s pseudo scientific, they try to use the axioms and methods of
science. Many of these people have tried
to teach me about what the world is really like. It’s often people with
backgrounds who have their own analysis of
climate change, for example. The question is what does it do
with people if, on the one hand, you use the methods of science, and the
axioms, and, on the other hand, everything
that is published contradicts that. So anything that contradicts
what you yourself Googled for many, many nights, and that
leads you then to assume that something is going to be foul in
the state. The few upright, outstanding
ecological scientists are being subjugated by the mainstream
opinion. And so they often drift off into these kinds of
conspiracy theories. In the late 1960, the hippy
generation was don’t trust anyone over 60. Indeed it seems,
many people over 60 trust only their own perceived
reality. And here, the leader behind the
Brexit movement, I’m not asking people to trust me but to trust themselves, so a
very typical address in populism. And indeed, our societies I
think are marked by contradiction, and
that’s what modern populism thrives on. On the one hand, we have more
and more islands of high specialisation in areas of society that we with common
sense cannot understand any more, and, on the other hand, we’re being told to
be critical, to be critical individuals to to question
everything, and not just take anything for granted, and this
kind of tension is what populism really thrives and grows on. When, in the 1970s, people first
noted that the trust in institutions
was declining, many reacted saying this is a good sign for
democracy. Especially in Germany, which is traditionally very harkical and
and – very hierarchical and authoritarian. Nicholas Lume, a quote, those
who distrust require more information, and, at the same
time, they become more dependent on less information which makes
it easier for them to be deceived, and and you can see the results. Yes, almost done, I will just
skip one thing: what do we do with this knowledge now? This may be contrary
interpretation of right-wing populism as anti-authoritarian
movement. Like all political scientists before me and after
me, I don’t have any panacea, but I think interpreting it this
way, many things are happening now that become clear and it
helps to understand what is coming in the future. I will
start with the positive news. There is a positive aspect to
this. . Since we talked about so many
historic allegori sext, with a mind set
like this, uh allegories – it has a clear
goal. If you look at the US, for example, of Trump, he’s done a
lot of damage, but he’s not able to organise in this way. There
are still more than 10,000 positions unstaffed in
Washington, DC. You can even see that Trump’s
closest confidantes and closest staff have refused to dismantle the
democratic institutions. So that is maybe one way that
this can be stopped by doing nothing. It’s often said we should do
something like reducing inequality, do it
non-violently, but I think the the institutions – and it’s
really hard to take citizens along for that. This kind of
language might be counterproductive, every time we
talk about populism, we say we should talk about it more, we need to
include people more which raises more expectations which will
ultimately be disappointed because there is no such access
of the individual to politics. Populism will always win here
because they don’t accept this kind of contradiction between
themselves and the world, so I would like to end by advising you all to to go calmer
into the day, to keep going for the long run, and have
a long breath. I spoke with the organisers before about not
having a of time. I would love to do Q&A but we
don’t have time for that, but I
promise you will be around here. If you want to violently
contradict me, I will be around and I won’t
walk away! Thank you. .
>>[Applause]. – Thank you for being here. We have a few seconds to change
the stage, and to get ready for the
next session. I am still fascinated by looking
at Moby Dick through this this entire station here. I don’t
know if you’ve done one of these reading competitions in
elementary school, but in the main hall in the front, there is
this nice wooden table with a big buzzer, and you can
read Moby Dick, and whoever can do it without mistakes longest,
there is already high score on a blackboard there. elementary school, but in the
main hall in the front, there is this nice wooden table with a
big buzzer, and you can read Moby Dick, and whoever can do it
without mistakes longest, there is already high score on a
blackboard there. Whoever can do it for the
longest time gets a free entry to re:publica
for free next year. I can’t participate in the drawing. See
how long you can read Moby-Dick without errors, and it’s also a
very interesting text. Everybody knows it’s about a whale but
nobody’s read it! So that fits very well with this
year’s motto with tl;dr. Okay, so, I take a look behind
the stage. Yes, they’re ready. So I will switch to English. And And .
Next for us is Sarah Spiekermann, and she wrote a
book about digital ethics. After the talk, there will be the
chance to get the book signed. It’s brand new. The session will be slides from
her book, and the talk is about the
human progress, ethics, and the nature of digital. Sarah
Spiekermann is a professor for business infomatics at the Vienna
University for business. Welcome! [Re:publica theme music].
[Applause]. – Hello. Thank you for coming to my talk
this morning. On one of the biggest projects
in my life. Which is, and was, to write a
book on digital ethics. You know when you start this
project, the first question is: what is
ethics? And, when you stop to think
about it and read into it, you find a
wonderful thinker. A wonderful thinker who lived
2,500 years from now – Aristotle. He had a concept about ethics
that he called concept about ethics that he called “eudominia” which is a state of
well being, a state of well being
that we as humans should strive for
throughout our lives, progress in the antique sense, and ethics was about being a
good person, growing through your life because you become generous. Because you become courageous. Because you become
knowledgeable. Because you become wise. This was what ethics was all
about in the old times. And this was also what progress
was about. And when you think about that,
you have to ask why do we think that progress is about giving
citizenship to dogs? I’m wondering about this
question. How can we be so so crazy? There seems to be a very strange
belief in that everything that we invent, that is new, is automatically good
just because this doll is walking on
digital, she’s great, she’s a better human. I think that we will never be
able to progress if we don’t understand our own history of thinking about
progress, and what is means to be a good
person. And this man here on the slide,
Albertus Magnus played a crucial role in our current thinking. In
the 13th century, he actually used, for the first time, the word
“innovation”. And until then, as I said,
Aristotle thinking was the most important
one, but he said why don’t we make
progress by innovating and bringing things into the world? The 800 years that followed him
meant that we think this way today. We invented the gunpowder, we
Kohlonised, we invented the clock and the printing less –
all those innovations, they made Europe, and many parts of the
world wevic, and so we thought, “Well, then, new needs
to be good, right?” Now we’re at the point where we also think if
we give up driving, that’s good. If everyone runs around with
headphones and Alexa, and Siri speaking to you while you go on every path
through the the world, accompanied by
virtual speech assistants, that’s good. Perhaps humans
should be replaced. That’s good. Even my students now think that
this is the future of thinking and
teaching, that they get their courses in
the bathtub. Now, I would say before that, I
think that digital is very much like a
good glass of wine, yes? It feels good, it adds to to to sociability and so on, but
there comes a point where maybe it’s enough. I think that these past months,
if not years, have shown us that there
is potentially an overdose, that
there are problems, that we are drunk,
because we have 250 million bots on
Facebook. We have Brexit. We have Trump. We have planes going
down. And we are putting people to
jail because of algorithms. Against this background, I’m
wonder ing wondering – this graphic comes from my book – is
where we are on this curve? We have to digitalisation, and
what we strive for is progress. digitalisation, and what we
strive for is progress progress. It may be that we are at an
inflection point, that we’re not progressing any more, but we are
regressing. And the question is: what can we
do? What we need to understand is
what it happening that the point. at that point. I think all the
the negative events we are currently observing are
really just symptoms of one underlying
reasonnegative events we are currently observing are really
just symptoms of one underlying reason, and that is the very nature of the digital
fabric itself. My hypothesis in this talk is
that, by only fixing the the symptomatic
problems of the digital fabric, we stay
at the surface of things and do not go
to the very bottom of what we should be
looking at. Because, if we just fix the
bias, in judicial software, if we just
fix the the box, the software problems
in the Boeing 87Max, do you think that’s really the end of
the story? Let me tell you what I mean by
the “digital fabric”. It’s best to understand that, if we look
at all the other fabrics, properties, and products that we
have, they all have a nature. There is a difference between
wool and polyester. There are great differences
between foods. Now, we pick or harvest
materials like grains, and we then process
these materials and and produce a new
product with the help of machinery. Now, think about it: it: analogously, you could say
that we code bits and process them through software with the
help of hardware. And, at any point in this chain,
there can be quality problems with
ethical input. This is why we are obsessed with
this. So many people these days – I
don’t know how you personally do it when you go to the stores – but I
recently have started to look at nutrition labels. So many people
are looking at this to try to understand what can they
eat? What can they buy? What are the side effects of
pharmaceuticals? In almost anything we consume,
we want to know what is in the product
and we then can take rational purchase decisions and we can also adapt
our behaviour wise ly wisely when
using these products. And now, what do we know
respectively about the digital fabric? This
does anybody think about that? Now, in order to discuss this
digital fabric, which is, you know, the
other products are tangible, the digital, you think it’s not there, because
it’s intangible, but as I just said, of course it’s there. We
receive movies through it, for example, so let me try to to
approach this to show you what I mean by
the digital fabric and how value
effects and ethical imports come along as a
result of that fabric. For that reason, let me turn one more
time to the physical world. Let me talk about wine. So a nice way, we know, for
example, that the raw material in alcohol
is Ethanol, and that it is a light, burning, spicy-smelling liver
poison classified as a drug. What we know is the absorption
rate, and our liver function actually leads to – and this is
important – to certain value effects, because at a
certain level of alcohol, we know from
ourselves what do we see? Socialability, talkativeness,
laxity, perhaps unforeseen levels of generosity, and, on the negative side, we
see nausea, fatigue, loss of
control, health problems, addiction, and this is
why we put the level of alcohol on the
bottle, because if we see it, we know already that the vodka has
a different implication than the beer, right? So we can behave towards that
material. Philosophically speaking, and technically
speaking – sorry for being a little bit academic now – is
wine is a value-bearer. At the physical layer, wine is a
value-bearer. It sets free, bears values such
as health, generosity, friendship,
and so, and because it’s a value-bearer, it leads to these tipsy moments of
friendship just as much to violence. Again, analogously, what do we know
about the digital fabric here? What is happening here? First, what technical
dispositions does the the digital have that makes it a value-bearer,
and, second, what are the value
effects? Systematically – systematically? Now, in my book, I say we have
actually two categories: we have, first
of all, relatively stable traits in the
digital fabric that are coming in anything we consume that is
digitally, and that is due to the very nature of bits
and software, and there is a second group – these are
self-made problems. Privacy, network late tenancy,
addictive hook mechanisms in user experience design. This is,
you know, we don’t need to do that. We can get rid of it just
by design. But, there are also
characteristic and properties like calories in the
digital fabric that maybe, that anything we see will always
suffer from them that is digital, which are those. It’s only a start, but limited
completeness, limited reliability, and high orderliness are three
traits that you will see in anything that is digital. Let me try to explain these
three. Limited completeness of the
digital. This is a photograph from my own
wedding. You know, when you’re in a a a
church, you’ve been in wedding, I presume, it’s such a beautiful
moment. What gives meaning to us humans
in such a situation are the values that
we all share. So, for instance, me of course
being very grateful that I’ve finally found a husband, being in love, love, my
husband is proud, hopefully, and thankful that I’ve finally said
yes. Our priest here in the back, who also smiles because he’s a friend,
friendship, and then, of course, this place which was a
thousand-year-old very magic place. And now think about those
values that are in the room here. Thankfulness, gratefulness,
love, holiness, friendship, magic. And And what do you think this
chap sees here? What does this chap understand of the human
world? Well, he has a camera system
there, he sees a couple, wedding, wedding
dress cost, social class, location, age – age difference –
emotion one, positive opinion if he’s a very advanced system, as
most data these days are looking for, he might be also having access to my pupillary dilation, the the skin
conductive data and say it’s an authentic smile. But how can he
know about love, sympathy, and so on? Just because he sees me
smiling? Perhaps computer scientists would say, so, I’m not sure, because
perhaps I smiled because there was a cat running down the the
aisle. This was the reason why I
smiled. It was a funny little thing, and perhaps this camera system
didn’t have access to that camera and
interpreting it as my smile being happy about my husband.
Where, to tell the truth, the cat wasn’t there, but the point
that I want to get is that following. Think about sympathy
and love. You look at someone, how often
have you smiled at someone not meaning
it? Has it happened to you? You just wanted to be polite. Now the computer system will
have the facial expression skills, and everything, sees you
smiling, and thinks, “Oh, she likes that other person. There
is friendship.” But, in truth, it’s politeness. How can the
machine now? It can’t. Moreover, even if you know you
took contextual data and tried to anticipate this is husband,
and as a result of the fact that it is husband, then
there must be love and sympathy, that would be the kind of
rational reasoning that machines do, well, in fact, not in this
wedding, but in general, you can have a radiant smile, you look
at another person in truth, your radiation comes because you’ve
been thinking about a great date last night. How can the machine now the
machine know? It can’t. The thing is, machines have a
very limited pixel intelligence that is very different from
human intelligence. It would still try to understand
the the situation. It will reduce the input it
receives and it will apply machine learning, looking for patterns that try to
to tell it what is really happening. And And thereby it converts
reality into a limited set of patterns,
and any digital programme will do that, especially artificial
intelligence. And by doing that, it will
potentially, because it has no access to meaning, throw out
information like the priest here. Why? Well, perhaps that algorithm, that was trained in
Russia. In Russia, priests wear hats, and if the AI that was
used in this context was trained in Russia, it would have probably cut out our priest
here, just because … .
. .
. .
. .
. .
. .
. .
. .
. .
. … these days. I’m not surprised, because these
are all systems of one underlying problem, and that is the law reliability
– the low reliability of the digital fabric. This is why, it’s my first
candidate for for … and I want you to take this away
with you. Correctness: there are many
dimensions on which we should be starting challenging the digital fabric
and do something very similar to what we do to our pharmaceutical
products or our food. We should be asking the IT
industry to work towards quality criteria
and to publish the degree of correctness and, for instance,
also the error rate targeted that they give to their programmers, and, by that,
compete on quality. Quality, not meaning
super-luxury quality, but quality in the sense of being
allowed to actually be in the market. I want to finish with a third
and final dimension of the digital fabric
that we should not underestimate, and that is its
orderliness. Do you remember the when the time was there, we were
handwriting, I put here on the right side my own handwriting as I prepared this talk, and it is
a little bit crazy. It doesn’t feel so nice and orderly as the left side which is from the
same chapter from my book. Now, what is happening is that
anything that comes digital, somehow, it
is pretty orderly, and it looks so professional. Because, the pixel intelligence,
it’s bits and bytes, and megabytes
based on a command existing in programming languages. As a
result, this looks and feels professional and orderly. But this is the problem,
because, when we see fake news online, it’s
also professional, it’s also orderly, and, as a result, we transfer our
heuristics and think that must be right. For this reason, another
candidate for Digifacts is authenticity that companies like Facebook can
publish on their news feeds. Let’s think about it, it’s just
the start. Let’s think about it. And to finish, let me sum up:
digifacts will generate understanding of what the
digital fabric is really about, how reliable it is, how correct it is, allows us to to to better understand the mirror through
which we today approach and look at our
world. So many times, we judge about
reality, not because we are there, not
because I’m an entrepreneur speaking to my employees, I look at digital
mirrors, I look at statistics, I don’t go to the country, I look
it up through Facebook. I go through the digital mirror. We need to understand to what
extent this digital mirror is actually
a blind mirror. In we only understand if we
know, for instance, that judicial software
is 45 per cent correct, and it has so many error programming targets, or
that it is not authentic. So my last word on ethics and
progress is that the true challenge is
how we can get out of the Matrix. Thank you very much.
[Applause]. There will be a book signing
afterwards. Where? Somewhere? Do you know where?
– Yes be I have to know. It’s a book signage place!
– So you will meet your guests who are right here at the stage,
and thanks for the talk. If you want to get your book
signed and meet Sarah Spiekermann, right
next to the stage. There will be a short break. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. – [German spoken]. Digitalisation us to think that
we can solve every problem with technology, that we can solve every societal
conflict with technology, and this is what the next talk is
all about, solution and tech religion. Please give a warm
welcome to Oliver Nachtwey. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. – All right, thanks a lot for
having me. I’m really happy to be here. And maybe at the beginning, I
have to, you know, be a bummer a little
bit. I’m a researcher of society, and I’m really, you know, to us as
researchers, it’s very hard to hold these
kinds of talks. And And large parts of my talk, I’m
just going to read out loud, but I’m hoping it’s going to be entertaining,
and you’re going to be able to follow my thought process. So, the spirit of digital
capitalism, the industry Silicon Valley, they claim to have a a
mission. The leaders of these industries
claim to give us solutions to improve
the world, but for cynical people,
this doesn’t sound right. We found out that they don’t pay
taxes, they improperly use their monopolies, and and we have the problem of fake
news and misinformation online. This seems to be a sort of
camouflage, this rhetoric of improving the world, but we should still take it
seriously, because in these industries,
they’re the products, they’re very attractive. The technologies have spread and
disseminated into people’s everyday life, and network
effects make it hard to resist these new products, and, of
course, they’re not stopping there. They’re investing billions into
research and and development. And lots and lots of people are
trying to get jobs in the tech industry. But, of course, the
biographies of these people tell a different story. People who are overwhelmed and
have this idealism about improving the world, and using technology, and we can
problem estimate that at the beginning of their careers, they didn’t expect to
end up in an industry worth billions that
is so crucial to the futures of our
society, and I think this is also very
interesting to the people here at re:publica. In Basle, in our research, we
used automated research and we used
qualified interviews with employees from
the tech sector, and and there we meet
people who really take Google there we meet people who really
take Google’s original slogan “don’t be evil” seriously. So, this rhetoric of improving
the world is sort of a camouflage,
but it’s definitely not bullshit, as the
philosopher Harry G Frankfurt put it. So this is part of a new sort of argument argument, new
ideological imaginary argument for the
digital capitalism. Capitalism, which is what our
society’s all about, always needed an
ideological imaginary driver, and the
sociologist Max Weber, in his work The
Spirit of Capitalism, and Protestant
Ethics, brought all of this back to this this way of living of
Protestants in Swiss in Geneva which was
influenced by Calvin. So, what is special about this
spirit of capitalism? This This puritanism, this value
veinism brought about a very controlled, a very rigid way of living that
was all about material success in the
material world. Because they didn’t know what God thought of
them. And, as Weber put it, it was all
about the accumulation of wealth, while, at the same time,
not spending too much, and leading a form of ascetic life. The thought was that God showed
his chosen ones by giving them
material wealth in the material world. And And Weber said that successful capitalism wouldn’t need this
sort of Protestant ethic, the spirit any more because it became so ingrained
within every people. But he also said he left a sort
of window open because he, as he
put it, may be we will once have new prophets of capitalism, a new
spirit of capitalism, and that’s the way I see it. We have these
multiple spirits of capitalism, and, in our
digitised age of capitalism, we have these monopolies, and we have to the spirit
coming out of Silicon Valley, and this is about an ethics of solutionism that’s
about improving the world but also brings about a metric way of living.
Technologies and artificial intelligence are not just tools
of this new capitalism, but they’re
sacred objects that define the way
society lives, and what happens to
humans within this techno-capitalism is some
sort of new Prometheus. People don’t go to church, but
there’s still a spiritual element to
this this digitalised capitalism. So, historically, a German person, a family of
bankers called Vuger, was identified as
the embodiment of the spirit of capitalism, but also Benjamin
Franklin was this sort of self-made man who accumulated
wealth, who worked hard, but who also
lived in a very ascetic life without
luxury. And afterwards, in another form
of research, in the last quarter of
the 20th century, we saw that
there’s a different spirit of capitalism
that already used these network
attacks within the new digital networks, and this spirit is all about this
new – a network pollers where work
is organised within networks, where connectedness and
authenticity is the central element. And the project-maker is the
central figure of this old spirit, but
now, I claim that there’s a new spirit of capitalism. So looking back at the history
of technology which neither Weber
nor Potanski looked at really, they always brought about huge
revolutions in technology, and so this was the typical way of
showing the industrial revolution, and today, the
fourth industrial revolution. It’s different to the other
three because technology and life are
closer connected. They don’t just change our lives
in a material sense as with the first
three revolutions. We just have progress, we have more material wealth, we have
automobiles, washing machines, and then personal computers. And And we could have a more
rational day-to-day life. But the fourth revolution brought
about a sort of embodiment of
technology. You all in the we rarely put
down our smartphones. It tracks our steps, it counts our steps,
and it listens to us. So this next susof an – this
envelope exsus of an economic – this
nexus of an economic way of life. I talked about Vuger and
Franklin, but these researchers also talked
about a new order of argument . And so this method of reconstructing
analysis is something that I want to apply to the
digitalisation now. What we can observe here is a
specific rhetoric of the internet companies of the big players that found their existence and the head of Google
said, for example, in March technology is not about hardware or software any
longer, it’s about processing the amounts of data that we
generate and make the world a better place. Similarly, Mark
Zuckerberg, the founder of Facebook, stated that it is the
mission of the company to create a more open and more connected
world. We want to wake up in the morning and the goal is not to make more
money. The CEOs from the silicon
valleys from the heroic figures of the digitalisation. The restless people who have the
next ground-breaking ideas. It is failing is part of the
process, and it embodies the nerd. It is
embodied by the nerd that primarily doesn’t want to make
money but wants to make a the world a better place with their
applications with his or her start-up, and while they are
successful, they don’t lean back but they continuously work. The CEOs of million-dollar
companies get up there with hoodies and sneakers, one
like me in Silicon Valley would be regarded as somebody from the middle
management in a suit. A large amount of money is being donated to to – so it’s what I would like
to call the the politics of solution. This ideology of sociality, the
goal to get these complex circumstances and interpret them in a way that you
can have concrete problems,
transparent self-evident processes that, with the right
algorithms, can be optimised. The bottom line here: this idea
of solutionism in Silicon Valley goes back to to Arnot Erin who regarded the
human as a thing of flaws. So the human is a thing of
flaws, needs to be encountered, and it needs to be helped. And the prophets of Silicon
Valley kind of take on this idea, and the world is small of little bugs that are
not systematically implemented in
the world, and they are not reproduced, but they are solved
and fixed with our technology, and all these small
and big bugs with a programme that might crash every once in a
while. The goal is to fix them all, so solutionism does not have the
roots with Gielen, the German sociologist, but more in the 1968 revolution,
especially, specifically from a tech-orientated hippy movement in California
that has made a name of itself in the
Californian ideology, and, within the Californian ideology,
there’s a selected kind of mind set about the the – one of
the biggest lies of the Silicon Valley is that the the
state wasn’t involved in the
development, the free development within Silicon
Valley is what it going to bring the world into a better place. So mobility, optimisation such
as brought about by Uber are being
connected to a welcoming idea. Air B and B is the figurest
company to offer places to sleep. The platform in the
beginning was really close to what sharing platforms such as
couch-surfing are still practising to this day. One of the causes for these bugs
in the world is basically found in
institutions and administrative agents, so solutionism looks at this in a
more libertarian way. Regulations are standing in the
way of the fault of development of
human potential. So they only protect certain
particular ideas, and that’s why Uber in the US, for example, criticises labor
laws. So solutionism is focused
heavily on companies and financial
resources to to advance in the world. world. If I want to
reach a lot of people, I have to create something really
quickly and radically commercialise what I create. For solutionism, these kind of
ideas can be translated into
technologically solvable problems, and they’re a bit of a
trade-off between freedom and security, or democratic
compromises. So democratic parliamentary is
regard parliamentary issed often as an old kind of technology. Solutionism, because of its
libertarian attitude wants to substitute politics with
technology, and that’s why the focus is so heavily on the
concept of the smart city, or communal infrastructure is being
handed over to tech companies, because
traditional regulated capitalism and political democracy is being disregarded,
and you want to create these kind of liberal islands
that can be self-governed. These old anti-cultural
capitalism – counter capitalism is a bit of a cyber politics. You look at the world not in the categories from classical
sociological theory of classes but in the question of the version of a programme.
Society 1.0, 2.0, 3.0, 4.0. So this kind of internet-centred
view of the world is a
socio-technological person who who, that is paired
with a future potential that
technologically, physically in the future, is what is the measurement of
success. So So, the standpoint of what is
physically possible is what it the driving factor. Why is our society so far from
this is, according to the CEOs of the tech companies, basically, due to the
lack of information, connected
information, and that the the most potential
within the information connectivity amongst humans. So, in the puritanism, for
example, success, economic success, and
solutionism, is measured by the betterment of the world. So this kind of resolution is
the destruction, and this goes back
to to Christiansen, was used by – Austrian economist –
so this this is really about how you can transform established
companies in the market. You can look at Amazon, for example, that fundamentally changed our
patterns of consumption, and also the way that the market works itself, so
implementing artificial intelligence, Apple created a
completely new market of telecommunications, whereas Uber
is breaking up the classic modes of transportation. It’s an endless scaleability
that is being applied to these companies, and they have a big
self-esteem. If you can influence the life of
– if you cannot influence the life of 100 million people,
you’re not successful, but you’re only successful when
you’ve changed the life of a billion. So this this promise of
overcoming capitalism, there’s to go going – it’s a false
promise because it’s always going to be a digital capitalism, and so digital goods don’t have any
margin nationalist costs, and instead
of – marginal costs, and we don’t see monopolies as
something bad, only these big monopolies allow the
investment into moonshot projects, for example,
Google’s’s Lab. So, up until here, I’ve looked
at this spirit of digital capitalism. From the normative foundation of
the found ers from the normative foundation of
the founders but if we take a different view on. Weber was interested in
religion, but Kapanski, thought after Weber, religion lost its
impact in society. Even for Weber himself, this
disenchantment of the world, everything magical was drained
from the world, was drained from religious life. But I think Weber and Potanski
were wrong about technology and also the connection between
technology and religion. They thought of technology as an instrument of rationalisation,
of enlightenment, and and smart
people people, enlightened people wouldn’t be detected to
religious thought, but this wasn’t the case. A lot of technology historians
always drew up this picture of a sort
of technological religion, so it wasn’t just about instruments,
it was always about improving human beings,
improving human beings’ relationship with God, and so think of the philosopher
Leibniz who invented the binary system in
the 17th century but also talked about the problem of the ODC, so making an argument for the existence of
God. And thinking of technology as
not related to religion is a really
result of the 20th century. And so, when we look at the
nerds out of Silicon Valley, all those inventors and developers who think about
bits and bytes all the time and might be privately part of a religious
body, but definitely because technology grew to this point in
our everyday life, there’s really this space to fill, to give us meaning out
of these technological solutions, and so,
for Weber, this spirit was stayed in
the background. The meaning of acting as human
beings in the world wasn’t graspable
for human beings, but nowadays, I think this spirit of digitised
capitalism is really founded upon the sort of tech
religion, and this magic I mention of tech religion shows
up in the foreground. It’s very well visible to us,
and so different researchers in the
20th century showed that technology
always has these sort of goals. It’s related to science, but it
is also a form of power. And Durkheim was also interested
in the effect of religion in
society, and so he said that especially in
the religious appraisal of totems,
so, for him, for for Durkheim, it was small
individual totems, that these totems were
able to become artefact, to claim
qualities that didn’t exist at all, and, as he
called it was manna, this sort of creative force that was able to bring
order into life. But this new spirit is of course
not a mono theistic one but a
polytheistic one. It uses different parts of different world religions, and there’s no
coherent met at that physics coherent metaphysics binding it
all together. There’s an solution to it -s
have an as Lukes to it. The gods that we create are
created in men’s image. bsolution to it. The gods that
we create are created in men’s image. Different beings because
these gods are perfect, unlike humans, so this is applied to charismatic persons
in the world. Every religion has its own
prophets, and magical qualities are being attributed to these
people out of Silicon Valley. For example, Steve Jobs, the
creator of Apple, he wasn’t just called a great inventor, but he was also seen
as this sort of holy person, as a sort
of prophet who was able to put put
– even to to to make technological
products and gadgets into some sort of Holy Grail. So he really every time he
showed new products to the world, it had this sort of religious drive, and so even
his followers, they waited for days
in front of these Apple temples, and
there is this sort of ascetic element to
it. For days you get nothing, and then you get the
enlightenment, the new product. Ray Kutsweil, who is working for
Google nowadays, he thinks that technology will bring this sort of estological thinking of the end
of the world, of the ap ASMT apocalyp apocalyptical end of
state, he said in the technology that we are in now, he said that we will have
the era of the sort of human machine
interaction singularity, and the singularity is not just about a
hybrid interaction between human and and machines, but it’s
also this new step of evolution, and
and Kurtsweil says that singularity will improve the human creative forts
into creative force into unknown limits. And we’re going to
create new people that’s going to go far beyond what evolution
was capable of producing, and it’s going to be a turning point
point. Civilisation is going to stick, it’s going to be a human one, but the
term “humanity” is going to become a technological one. It’s
not going to be biological any more. And so exponential growth is
going to keep happening. It’s going to be this
accelerated growth of technological
improvements and societal progress. And really, this is sort of
Moore’s Law – every two years, you get
double the processor speed – but for societies. So this
combination of human and machine and intelligence isn’t just the
end. There is also going to be the
awakening of the the universe. And this fantasy is the sort of
paradise where algorithms and artificial intelligence are
going to be able to control the progress of society
and control society itself in a way
that no human ever could. And so, this tech religion doesn’t have
a creator as other religions do, but the creator is created himself by
humanity’s technology. So what it comes down to for
Kurtsweil is that God isn’t dead. There hasn’t been a God
yet, and God will be created in the future. So his position within alphabet,
within Google, as the pioneer of digitisation, we can see that this this isn’t
the imagination of some of some
weirdo, but really, there’s a legit myself
for legitimisation, and gene editing
and other considerations. He also founded a worldly
church, a spiritual centre for this tech religion, and, three
years ago, I took part in this that, and I’ve met lots and lots
of people from the management of
big German companies, and all of these
people listen to his prophesies on the
future of humanity and intelligence, and, before I get
to an end, I want to speak about transhumanism a bit. A rhythmic perspective, the
first aspect is artificial intelligence created by neural
networks that learn by themselves, and where prophets
are expecting that it will be able to take on the complex cognitive level
of humans and overtake us. And then there is bots such as
Siri and Alexa from Amazon where AI has
reached our personal lives, and there are a few reports that
this intelligence is soon going to be artificial instead of just doing what people tell Alexa to,
she might start to laugh creepily. Not everybody – people don’t
know why. Maybe she’s laughing about us humans because we
partake in this whatever it is. So this essential points of
algorithmic intelligence is that she in the
future will overtake human
intelligence, but also that we cannot understand it any longer,
because artificial intelligence, based on deep learning, is
emancipating itself from their creators, and so in itself is going to become a
God-like creation, such as the dios
absolutos, like the puritans imagined, like this data-driven
idea of absolute knowledge and the collective knowledge and
the collectivisation of – so the second aspect of this
Godly aspect of digitalisation is that it’s about humanitarian and non-human
ideas, and there is this obviously
people and ideas that existed before now so
that we can separate the spirit from the
flawed body, therefore can become
immortal, and by finding solutions for human
body issues, so the humans
subjectivity we will upload our spiritual ideas,
and they will become immortal. At the centre of this this digital human machine
religion, it is about the immortality and the technical transcendsy of the
human, and this perspective of the humans
is usually summarised as transhumanism under ideas of this earthly paradise goes back to
the beginning of information technology, and this idea, this vision, to to
merge humans with technology. They were becoming transhuman
that are advancing via implants and changes within our DNAs, and, by that,
become immortal. This solves this idea of it … so, this idea of technological potential, what is the norm is
what it possible, and and it will impact all of
humanity. And it doesn’t stop with the
human’s biology. We’re going to grow over
ourselves. Humans, and human machines, technology is the means of transcending. We should mention, mention, though, that there are
a lot of prophets from silicon who aren’t sure about the
singularity in secret. They might believe in Biblical
things like the coming apocalypse because they’re
afraid of civilisation breaking down, and they bought a lot of
islands off New Zealand! A last point that I want to make
for Weber, the spirit of capitalism was something that
was expressed in the concrete lifestyle of every
single one of us, and this kind of
lifestyle I want to talk about in the sense is a
metric lifestyle. For Weber, most world religions
promised health, long life, and wealth. The puritans introduced
introduced – they kept a religious diary
where they would keep track of their sins. Today, we don’t need
this diary any longer because our smartphone is keeping it for
us. The smartphone counts our steps,
it measures our heartbeat, our
frequency of procrastination, it saves our search histories, and
shopping patterns. Every single sin is potentially visible, and pote potentially
eliminated. In the puritans, everyone was on
their own, and they couldn’t really express their things, so loneliness was
of the individual – loneliness of the individual was becoming
more apparent. Today, if we talk about individualisation such as
… another sociologist, another different direction to Kurtsweil speaks
about the practice of individualisation, but today we
encounter in the digital economy and beyond that this inner world is
is – or austerity, the digital
quantification of the social as Stefan Mau, Berlin sociologist calls it, provides a universality of competition that
is gained by the metrics of the
individual, and you publish it through means of Facebook, and social media, such
as that. By that, we have this kind of
ethic resolution, technical resolution, tech religion, and
this is what I called the spirit Fe of capitalism in the
digitalisation in my talk today. The puritans Protestants adapted
but in a digitalisation, you have the life so-called in your
own hands. But, at the same time, your God,
or our God, is just as hash as the one
of the puritans as it documents every single step we take, so
this idea of trans human transhuman will only be
the salvation for the happy few from the Silicon Valley. For
everyone else, there’s a new danger of digital rule as something
that what Weber called the iron cage, but this time around, the
iron cage is made up out of algorithms and our our
freely provided data. For the average sinner, the digital spirit of capitalism will have
few spaces, and out of this way of living, and this iron cage, and this
power of digital big players, and big companies, we need to
start a new conversation that I think was, has been started
here at re:publica, and I think it’s something that we need to
advance, and we need to have it as a debate about the future of no-strike and the the democracy, and the future of
society. Thank you so much. [Applause]. .
– Your applause. [Applause]. Do you still have energy for
Q&A? – Yes, yes, of course I do. I left, I talk quickly, so there
was going to be a little bit of room for Q&A. Up in the middle, we have a mic.
Do me a favour: raise your hand if you have a question, and then the
mic will come to you, or you will come to the mic. There’s
one in row 4. Say your name fast and then the
question. – Hi, thank you so much. That
was really, really insightful. I have one question: because
Kurtsweil introduces this term of transhuman, and it’s a a
processing idea that you didn’t use as a term as
a posthuman, how would you define and categorise
posthumanism that is reached? reached? – Yes, I didn’t go into details
in the talk about that, but if we look at the work of Morabeg from the MIT,
from the work that started in the 1950s and the 1960s, you see
that, especially these people that were working at universities, focused
on technology, even from Wiener, there’s a a book about God and
cybernetics, coffer coming humanity, that is
posthumanism. – How would you categorise humans about metahumans? – I would have to look at that
in detail! I read it as sort of from a
viewpoint of religion history, and, yes, I
can’t say much about that right now. now. – Thank you so much for this
great talk. We are obviously a church here!
When one of the most essential questions that I have this
weekend is the inKong inKong rans of state technology
– the way that technology is acting. congruence of state technology –
the way that technology is acting. Smartphones, and the way
that we are using it. Weber – nothing has changed
today, separation of power, our forms
of of – now, we have a complete
speed-up and acceleration of technological development and we
all of a sudden have this margin lies. Do you see the potential that it willisation gives us a
societal potential to technicalisation gives us a
societal potential to – to take technology as a helper? – Yes, this is a good but not
that easy to answer question. So, different example:. When Ford started his all the mated his automated construction of
automobiles in his factories, they knew that the workers
aren’t productive enough, but in the end, this new form of production, this assembly line
led to to new organisational forms of work, because those
people at the assembly line gather new standing, and
there was a whole new way of working because the factories
grew larger and larger. If we take this and then look at
today, of course, digitisat digitisation is this
form of power, but at the same time, it’s the medium of a new social
organisation, so, when we think about Twitter today, we think
about Trump and hate speech, and, oh, my God, but eight years ago,
there was Occupy and the Arab Spring,
and, back then, Twitter was a very useful tool for organising
progressive movements, and so these technologies are open in a way, and it’s really
about how societies and communities use
this tool. – Halfway. Hi, you made really
good observations. Thank you so much for that. Now we have three
decades of the World Wide Web. We have multiple multiple – how can we include
the baby boomers who got raised in an analogue way, and the Snapchatters. How
can we bring them together? How can we functionally achieve
this? – My biggest biggest privilege
as being a sociologist is I can always point out and speak out
my critique and then always answer, “Well, I’m sorry, if you want solutions, I’m just a
sociologist, I don’t have any any! ” But I’m not that mess not that pessimistic about it,
my mum, for example, a faster typer on
her phone than me. We as a society have increased our sensitivity to these problems,
to think about and include the older
generations, and have to sort of support for
digital literacy. Of course, this is a very general point of
view, and I wish there’s someone who could give a better answer to
that than me. – Great interesting talk. You talked about cybernetics.
For some privileges, it’s not a problem if we keep the sons of today,
but I see we have these structural problems such as
sexism and racism, and we translate that, and we – that goes into
data as well. And these problems are just
being reproduced. these problems are just being reproduced.
– Yes, completely right. But, there are attempts like
like Algorithm Watch this to this
heightened sensitivity to see algorithms as what they are.
They’re not technology in the first place, but they’re produced by
certain people with certain interests. And so the discussion about biases in automated procedures
are just beginning, but I think what we see so far is pretty positive, and what
I want to talk about is look at the
possibilities we have, and then maybe even think about, you know, I’m
a a of a traditional sociologist, maybe
I should have chosen a different approach, but maybe talk to people who
study socialology to code, or to understand code, and to be able to look into these mechanisms with
a critical point of view, with sociological theories, and then
maybe point out and find these sex ist
sexist theories that are encoded within these algorithms. There was a talk called Unbiased
Algorithms earlier. You can see it online.
>>So super interesting talk. Thank you so much, first of all. I wanted to to – maybe goes into ideology criticism, that, yes, this was
ideology criticism. I would like to know from your
point of view, how what would you would describe as
solutionism and the resulting tech religion that comes out of
it, how would one without the other kind of
act, these kind of big tech players, these
big personas within these companies in a non-capitalistic
society, would this even work? Is this intertwined at the base
so much? much?
– It’s really interesting. I would say that all of these
companies have huge problems right now,
so, in Silicon Valley, you have these
two dimensions of labor problems, so
it’s about labor conditions, but it’s also about co-operating with the
Pentagon, and they have this, you know, saying, “Don’t be evil”. We have have – definitely, capitalism
has become more moralistic, and it’s becoming of more
moralistic, and these companies really have to defend
themselves, so the debate on this new organisation of work, every
company has to put out some sort of meaning. They have to to – so, it’s not
always linked to capitalism, but the potential for criticising
capitalism, like how can we change the world? How can we
improve the world? By having this focus, it really
put companies into this tight spot where they have to defend themselves
and their practices, so maybe this this,
these criticisms can lead to more democracy, and there’s a
potential for a counter counter movement there.
– Yes, thank you so much for this talk. I’m interested in the
question about the relation of the the, between artificial intelligence and and
consciousness. There’s leading thinkers who
kind of draw a line there, so artificial intelligence is exponentially
growing but we have the privilege of having
consciousness, which, for us, obviously, is a
political standpoint in how to handle these kind of
developments. How do you regard this discourse in Silicon
Valley? How do you see that? that? – In Silicon Valley, we always
have these Four Horsemen going
around, and sometimes these Four Horsemen bring about a better
world, but mostly, they’re a danger, so, we we get
this consciousness, this artificial intelligence that is
not controllable any more, and as a sociologist
of knowledge, I’ve looked at the history of this debate, and
looking back at automation, and the loss of work places, there’s
this consciousness when you look at sociololgists of the 1950s
and 1960s and earlier, soon we are going to have this new consciousness within
computers. They already had that back then, this thinking, but it
never came about, this new sort of consciousness, and so I’m
very sceptical, is that ever going to happen, and I see it as this
ideology that legitimises the actions of
Silicon Valley to sort of of – that
tries to evade societal political
control. – About religion and this kind
of comparison to religion, I wanted
to religion, I wanted to ask about the term of “truth”,
because doesn’t that also belong to religion? If so, what would
be the architectural of “truth” in today’s Silicon Valley
religion that you have sketched out here for us, in times of
fake news, and things like that? – Yes, the term “truth” is
always a bit hard to work with. When we think of telegrams in
the 18th and 19th century, we see that
technology always produces fake news. Even the first newspapers had
the same problem. So So Frank further, I quoted him
earlier, and he said bullshit is really
all about the people who don’t care if they tell the truth or
not. But I don’t think that that
they’re really bullshitters, but within
silicon value which there’s this
tolerance to ignoring the truth. And And that to me isn’t
something completely new with you rather
something that the truth within our
society has been endangered for quite some time, so there’s a
lot of talk about trump and how he deals with truth, and of
course, the New York Times is like we
speak what is true, we’re the old society,
and I always try to remind people that the New York Times and and Sudd – Zeitung from Germany,
helped invade Iraq in 2003, so truth is
always endangered, and, in Silicon Valley, yet they took
over this tendency, because they think of themselves as a
technological solutions, and not the producers of truth.
[Applause]. – Thank you very much, Oliver
Nachtwey. [Applause]. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. – So, welcome back HP how are
you? Good, okay? Perfect. Right now, there’s an Ask Me
Anything with Corrie in the community garden but I’m sure you’re here to see our
next session. There’s nothing to miss because there will be a second Ask Me Anything
at 1. 45 in the Wonderful Together
booth in the main hall. This is just a short announcement. But
now for something completely different, yesterday, we had a
great keynote from our Federal
President about President, Frank-Walter Steinmeier, about
democracy and technology and about how they might stick
together. Today, we get the topic from a different
perspective: things we cannot fix with technology. Welcome to
Nanjala Nyabola. [Applause]. . [Re:publica theme music]. .
. .
. .
. .
. .
. .
. .
. – [Applause]. .
– Hi, everyone. How are you all doing today? My name is Nanjala, and I’m from
Kenya. In August 2017, like pretty much
everybody in my country, I tuned in to one of the most unexpectedly
riveting dramas that I had ever seen in
my life. Now, it’s important for me to
caveat this by saying I’m a recovering lawyer. That means I
have a law degree that I don’t use, and it makes my mother
incredibly annoyed! But, for three days, non-stop,
10, 12 hours a day, I was tuned in to the proceedings of the Supreme Court
of Kenya. We were watching what was the
presidential election petition. That means that someone was
challenging the results of the recently
concluded presidential election. It sounds really boring, doesn’t
it? And and 90 per cent of the time, this would be true. We had
had presidential election petitions in Tenja in 1992, in
1997, again in 2007, again in 2013.
What made this unusual was that, for the first time, we were hearing
evidence, we were watching evidence being submitted, and debated, in
Kenya’s first digital election. The The 2017 election in Kenya
was unprecedented in scale. By some measures, it remains the
most expensive election in the world. world. $28 US per capita. This election
petition was something to behold. Were the judges actually
going to be able to understand what was happening? Could they
tell the difference between the different platforms that were
being used, acronyms, RFTS, EVED?
Would they be able to distinguish between the concepts
of the techies were throwing at them? What had we created, actually,
in 2017? Had we created the most
transparent election in the world? Or had we created a black box
into which information would flow and chaos would come out?
As I said, my name is Nanjala. I’m a story-teller, not a
technology. I got my first smartphone about five years ago.
I still have a very ambiguous relationship with my smartphone. I love, and one of the things we
would consider technology, is Go
vaccineVaccine, Netflix. As I said, I’m abbiggous about
my mobile phone. But, as a story-teller, I’m very fascinated by what happens when
worlds collide. When things that are not
necessarily natural bedfellows that don’t necessarily belong
together, when they collide, then I get excited, because you never know what was going to
come out of it. What I’m going to do for you here today, is I’m
not – I’m going to tell you a story. I’m going to tell you a
story about how an unexpected country in an unexpected corner of the world finds itself
at the forefront of some of the most pressing questions regarding democracy in
the modern age. You’re probably thinking there
thinking Nanjala, Kenya, come on? Are you trying to sell me
some snake oil? It’s very difficult in the way we’ve set
up the narrative of human history to imagine that an
African country that is not the most, not the biggest,
not the richest, not even in by many measures the most interesting, can be a
place of learning, can be a place of
insight. And I’m here to convince you that that is the
case. What has happened in Kenya in
2017, what we are still dealing with, the consequences of which
we are still dealing with, are issues that everybody in this
room who is interested in how politics and technology collide
should be paying attention to. Why Kenya? It’s not just because
I’m a Kenyan. It really is a combination of
two things. I’m going to be drawing from
this this, for this talk, for the purposes of this talk, I
will be drawing heavily from my recently published book called
Digital – and it’s great that the theme
of this conference is tl;dr because I’m going to give you
the highlighted investigators of the book, but I still want you
to go and buy it because, remember, I
have a law degree which means I have Soviet Union loans, right!
I’m going to give you an overview of some of the
arguments I presented in this book, the ideas that I want you
to walk away engaging with and thinking about. It starts off, as I said, with
the origins question: why Kenya? Why not Nigeria? Why not South
Africa? Why not Germany? Why not the United States? When I was pitching the book, a
lot of publishers would come back with me, “Nanjala, why
don’t you write a book about Africa? ” Africa? ” #lost the rain down in Africa.
Africa. Every time I say after of can a
like that [speaker sings]rain down in
Africa. Every time I say after of can a
like that [speaker sings] “I bless the rains down in Africa. ” Africa is one of the most
complicated socio-political constructions that exists in the
world today. [Applause]. Yes, 54 countries. But also thousands of
nationalities, thousands of identities,
overlapping intersecting, many histories, many societies, many
different political histories. Each of those identities and and
ideas collides differently with notions of what we would
consider technology. If I had written a book about Africa as a whole, so much of the the
important nuance would be lost, so much of the important ideas would get
diluted into simplified narratives that obscure the
reality of the the condition. You’ve heard talk today about
the danger of a single story. If I had written a single story
about Africa, so many important things would have have been
lost. For those of us in this room, what would have been
important is the idea of agency would have been lost. We seem to
have this misguided perception that technology is something
that just happens to people. Someone gives you a mobile phone
and suddenly we are disrupting financial markets, we are
disrupting communication networks, we are going to save
the world. But, really, technology is
shaped by people. It is shaped by the people who build it, it
is shaped by the people who use it. And so I didn’t want to tell
a single story that obscured from the fact that there is so
much agency, both good and bad happening, and not just in
Africa but really around the world. Technology is shaped by
us. And I wanted to get into the get
into the reeds of that. The second reason I wanted to write
a book about Kenya is, as I said,
an unexpected country in an unexpected corner of the world
finding itself at the forefront of some of the most pressing
debates of our time. As I said again, not the biggest, not the richest, not the most
powerful militarily. So how is it that, in 2016, by
2016, Kenya was the world leader in
mobile money? I’m assuming in this room knows
what mobile money is? Making payments through your mobile
phone, mobile money transactions. In 2016, mobile
money transactions in Kenya amounted to the equivalent
of one third of GDP. There is no country anywhere in the world that comes close – not
even Germany, not the United States – to move the amount of money that Kenyans
move on their mobile phones today. Kenya is also home of apps you
might have heard of, one which allows
people to map disasters around the world, so to allow first
responders to respond urgently and quickly to emerging
disasters. It’s been used in Kenya, it’s been used in Nepal, it’s been used in
heightity, Haiti. How is it again not a country by
these metrics not a country that find itself at the forefront of
these conversations? How is it that it becomes home to the most
expensive digital election in the world? These are some of the questions
that I seek to answer in my research, and I seek to answer
in my book. tl;dr, I’m not going to go into
some of the minutiae of some of these arguments, but I’m going
to give you the highlighted story version of how Kenya ends
up becoming this particular example, and why it matters for
everybody in this room to understand what went right, what went
wrong? What technology promised and what it delivered, and what it just
could not fix. To me, this story of technology
and politics in Kenya is inseparable. And it begins with elections. We
are one of those countries that has always had had, let’s say,
interesting elections. For the better part of its
lifespan, which is about 56 years right now, Kenya was a
one-party state. Some of you who grew up in East
Germany probably knows what living in a one-party state
feels lying. It’s being afraid your next door neighbour, it’s
being afraid of the person you’re dating or sleeping with.
Are they going to turn me in for expressing the wrong political
opinion at the wrong time? It’s university unites being
rounded up and disappeared for saying things in class which
challenge the official narrative. I grew up at the tail
end of the authoritarian regime. I remember what it was like to
be afraid to say the wrong thing in
your own living room. In 1992, we had our first
multi-party election. It was a violent election. It was an an election in which
the violence lasted for over two years. The thing that made it
unusual at the time was that most of the
violence was concentrated in rural areas. And that meant
those of us who left in the city could still indulge in the
illusion that we were that meant those of us who left in the city
could still indulge in the illusion that we were living in
“the most peaceful country in the region”. The opposition to this day
claims that they won that election, and,s
asaid, it was a presidential election decision, and the court
decided on a technicality to award the election to the ruling
party. In 1997, we had another
multi-party election. Again, there was election
violence with hundred of people killed, thousands of people were
displaced. It was concentrated in in rural areas. The pattern
repeats itself. Again, the opposition went to court, and court said on a
technicality, let’s award it to the incumbent. So, by the time the the 2002
election came around in Kenya, there was
this sense of we figured out the voting part, we have to figure out the
transition part. We stand in line every five
years years and peacefully cast our ballots, vote. We know the
opposition is winning, and then something happens on a
technicality, and it’s awarded to the ruling party. There was a
sense of expectation, there was a sense of fear. You can imagine what it’s like,
as I said, up to that point in my
life, I had not known any other party. I had not known any other
president. I had not known what it meant to breathe freely. So
you can imagine in December 2002 when the the Election Commission
announces the opposition has won the election. Euphoria doesn’t
begin to describe it. Suddenly, the horizons look different.
Suddenly, the political possibilities look different. We
believed that something amazing was on the horizon. In 2005, we had a referendum,
and we said we want to get rid of the colonial constitution. We want a new constitution that
better reflects our values as a modern nation. The ruling party, which used to
be the opposition, turned around and said, “Actually, we want to
protect some of the executive powers. Actually, we want to
keep some of the good stuff that you gave to the former ruling
party. We kind of want it for
ourselves.” There was a lot of fragmentation. People defected.
People left the ruling party. People left the opposition. They
came back and they went to referendum, and 58 per cent of
Kenyans said no, we want to change the whole thing. Stay
with me. This background is important. Fast-forward between 2005 and
2007. The opposition has just won an
amazing referendum against the ruling party. They’re fired up.
They’re excited. We have an election coming up in
2007, and everybody thinks we’re going to get them this time.
What does the government do? They start to crack down. They
start to crack down on the press. In 2006, the officers of the
Standard Media Group were burned down. Witnesses claim they saw a
man in military uniform setting the fires, leaving the site of
the crime. When he was asked about it, the then minister of
the interior said on camera, “What do you expect? If you
rattle a snake, you can expect to get bitten.” They were not
denying it. The First Lady goes on camera
and slaps around a journalist on live television and holds a news
room under siege for almost an entire evening. Again, reprisals, attacks. This is the climate in which we
went to the 2007 election. And this is again the tl;dr argument
that I make in my research. The 2007 election violence in
Kenya is both the instigator and the
origin of some of the most important
technological developments we’ve seen in the country. By the time that election
wrapped up, 1,500 people were dead, over
100,000 people had lost their homes over a three-month period. What had started in 1992 had
escalated, and escalated, and escalated to a point where it
looked like Kenya was not going to make it. So what did the
politicians do? They sat in a room, they called
Kofi Anan in and said, “Help us fix
this.” On the one hand, what came from that was the National Accords for
Reconciliation and Dialogue, a series of four documents, one of
which is called the Independent Review,
Commission or the Kriegler Commission. They looked back at
the election and said, “You know what the problem is? People
don’t trust the system. If we can just teach people to trust the system, then elections this
in Kenya will go better.” How did they decide that they are
going to engender that trust? Let’s three some computers at
the problem. Let’s make this a tech problem. I hope you can see where I’m
going with this! At the same time, away from the formal strictures, away from the formal
societies, there was agency. There was creativity. There were people looking at
Kenya and thinking, “We can do things differently. ” By 2006, Kenya had one of the
largest diasporas and Africa. 6. 2 Kenyans were living abroad –
at the time, that was about ten per cent of the population
before these were people who statistically would be
considered highly educated, so people with a university-level
education and above. I was part of the diaspora. I was studying
abroad. I can tell you there’s nothing as horrifying – well,
I’m sure there are things that are more horrifying – but for me, the horror of of sitting
thousands of kilometres away trying to get information about
your country, watching the news every day and seeing all of
these horrific stories about people being killed and
disappeared. January 2nd, 2008, the worst
incident of civilian-inflicted violence in Kenya in which about
38 people were burned to death in a church in Kiemba. Can
you imagine what it’s like to be thousands of kilometres away
from your thereby and from your friends, and reading that news? Not being able to get
information from that. What did this diaspora end up doing?
People organised online. Concerned Kenyan writers, people
went and started blogging with a
frenzy that has not been matched since. Remember these are people
who statistically would be considered highly educated. They
were people who are able to speak into the reality of the
country but also to process that information for the benefit of
both an external and an internal audience. At the time, the local
papers, the local TV, now you go online, and
you have watch Deutsche Deutsche
Weller for free. Sometimes, they updated the websites. Sometimes
they updated the websites. Sometimes there were pictures,
sometimes there were no pictures. This was the local
press. Blogging became a substitute for
the press. Social media, Facebook had
launched in 2006. A lot of people had started to use social
media, but we had a major uptake coming in the shadow of, “I need
to keep in touch with my family”. Another important thing
that had launched in 2006: mobile money.
Now, if you live in a country where credit cards are all the rage,
where now you take Venmo and PayPal for
granted, and you have to think about Kenya as what we call a
dual system. I live in Nairobi, so do my
parents, but my grandparent live on the other side of the
country. My cousins live on the other side of the country. When
the violence breaks out, suddenly, it means that I can’t go to to
my grandparents. I can’t send them money, but they’re
dependent on those returns, on that money that I send them to make
ends meet. What mobile money does in a
vacuum like this is that it makes it possible for people who live in a dual system
to keep communicating, to keep reaching out to each other. We
are finally able to keep sending money, even though I physically
cannot get there because of what is happening in the country. So you get the sense that that it’s the best of times,
it’s the worst of times. Sure, the country’s falling apart,
sure, we are anxious and we are frustrated, and we’re angry, and
we are confused, but we’re not just sitting there and taking
it. We’re coming up with things. We are developing things. We’re
using existing platforms to build new things. And, as I said, this is my tl;dr
argument. You cannot separate what
happened and in what I call Kenya’s first
digital decade, 2007 to 2017, from what
happened in 2007. Politics and technology in Kenya are
inseparable. The aftermath of this era of
confusion, as I said, is a significant
amount of innovation. If I say the phrase “Silicon Savanna” has
anyone heard what I’m talking about? This massive wave of
people trying to encourage on the back-up success
of the all of this strong web presence,
people trying to incue bait a system
whereby tech is king. We don’t make things in Kenya
necessarily. We don’t have oil. We don’t have – our biggest
sector at the time is tourism, service
industry, so maybe this is an opportunity for us to do
something different. And so you have this massive
movement of resources towards making that happen. But at the
same time, you have all of these people who are encouraged
and inspired on their own creating,
making, trying to make space for a new
political narrative. By 2017, there were 12 million
Kenyans on Facebook. When I say that number here in Germany, you might look at me and go,
“Okay, there’s like 12 million people in Berlin. What’s your point, lady? ” In 2015, there were five
million Kenyans on Facebook. And in 2013, there were only
about two million Kenyans on Facebook. What does that graph
look like? The exponential uptakes of
social media. Right now, the biggest social
media platform in Kenya is not even
Facebook, it is WhatsApp. WhatsApp is biggest social media
platform in Kenya is not even Facebook, it is WhatsApp.
WhatsApp is what we call “dark social”. It means it’s a social
network that generates – I can’t see your connections, I can’t
see who you’re talking to but what you’re doing on WhatsApp
generates internet traffic. Why am I throwing these numbers
around? What happened between 2000 and
2017 is a kind of collision between what people were doing on their own, away
from the state, and what the state was
trying to do that culminated in the 2017
tension. As I said, on the one hand, you
had a massive wave of people moving towards online platforms, first
blogging, and then substituted into social
media – micro-blogging. You had a a massive uptake of
people using social media as a
substitute site for demanding accountability, demanding good governance, building new
communities and ways of being. My favourite example of this is
what is what it called #my dress my choice. Has anybody seen
this? In 2015, a young woman was
beaten and assaulted in a street in street
in Nairobi for “being inappropriately dressed”. Stripping unfortunately in
itself, not an unusual thing. It happens in Kenya, especially
during moments of political unrest. One of my former law
professors calls women the canary in the coal mine of
political unrest. If you look at what is happening with the rights of women in any
society, you can get a pretty good reading of what is
happening in the society in general. So, when spiking – when you see
a spike in stripping attacks against
women in Kenya, we know that things are
going badly. So she gets beaten and
assaulted. But, for the first time in 2015
– unlike in 1997, or in 1992,
there is YouTube. There is Facebook. There is Twitter.
Yeah, there’s a lot of bad stuff, there’s a lot people who
document such attacks for the titillation of audiences, people who film
violence for entertainment. But there was also a focus group
of women who had been on Facebook, and they had started out as a mothers’
group, organising car pools, and
keeping the neighbourhood looking good, and they said
enough is enough. I’ve just received this video of
this woman being assaulted, and I have had it. I need something
different to happen. We need to have a march. We need to have a
protest. We need to have a petition. And all of these
things happened, and, for the first time, at least in
my living memory, the government was forced to respond. We have assault laws in the
books. In cases like this, they’re very
rarely enforced, but because these women, this group, grew
into a movement on Twitter, on Facebook, into a petition
with over 5 million signatures, for the first time, the
government could not pretend that it did not know what was
happening. So My Dress My Choice becomes
the first case at least in my living memory whereby the a a — the director
of public prosecution assist forced to bring a case against
the person who was identified courtesy of a video that was
taken and posted on social media. This is just one example,
justice for Fatouma, Justice for Aisha. Women in Kenya who have
found themselves shut out of traditional media platforms have
turned to social media, have turned to digital platforms as a
place to tell their stories, as a place to keep institutions
accountable. That’s good. We’ve seen people pushing back
against the narrative of ethnic politics. Oh, Kenya is just
tribalism. That’s what the narrative is. This narrative
makes people inactive, because they think surely if
everybody else is doing ethnic politics, why should I try and do things
differently? If this is how we’ve always done it, then this
is who we are. But suddenly because you see people building new networks of support
of inclusion, the narrative starts being challenged. One of my favourite hashtags on
Kenyan media is called Ecokozike. Where
people ask for work. I have a BA in architecture,
basic been out of work for a year and a half. Can anybody
offer me an internship? There are people who have gotten jobs
and resettled after election violence. There are people who
have gotten all kinds of support outside the traditional bounds
of ethnicised politics. This is important. That is the the good
news. You probably have a vague sense of what the bad news is.
It’s really interesting to be having this conversation in
Germany also, because I think you guys are
experiencing what we are experiencing, that, as easy as
it is for the good gills to find each other, it’s just as easy
for the bad guys to find each other as well. And people are building new
networks that incubate hate speech and violence against women, that incubate
violence against people who are of religious or ethnic mine
contracts. It’s tough to be a Somali Kenyan
on Twitter today. It’s tough to be of a religious minority in
Kenya today. Because of the speed and the
frequency at which hate speech flies. There is another layer to this,
and, ask you go into the European
parliamentary elections, this is something you should be paying
very close attention to. What happens when people realise
that money can be used to shape how
people behave online? When I say the name “Cambridge
Analytica” people often think about Brexit and they think
about the Trump campaign. What if I told you here that
Cambridge Analytica has been active in Kenya since 2011? That
they were actually instrumental in the 2013 Kenyan election that
brought into power the Jubilee
Administration? As I’ve mentioned that the 2017
election was the most expensive election in the world. That’s
the formal spending. One survey found that an average
presidential election candidate in Kenya spends US$50,000 on their
campaign. In 2017, the Jubilee
administration spent US$60,000 on Cambridge
Analytica alone. It was one of three different
corporations that they employed, British corporations, American
corporations that they employed to run their social media, their
digital presence. The opposition also invested
greatly in data analytics in using social media to influence
public opinion. At the same time, we had a great deal of
the, a great number of the issues that had come up in 1992, in 1997, in 2005
remaining unanswered. What would be the role of the
media, right? What story would the media tell?
One of the first things that happened after 2007 is the media saying,
actually, we don’t want to be implicated in creating chaos in
Kenya, and so we are going to take a step back. We’re not
going to be as critical, we’re not going to be as analytical,
we’re not going to be as pushy as we need to be. And so social media blogging
becomes a surrogate space for creating
information and disseminating information about what is
happening, about the political conversation in Kenya. What
vulnerabilities did that create? What does it mean when people
get their political information from
online sources? It means you don’t have any of the things
that you take for granted with traditional media. There is no
verification. There is no fact-checking. There is no proof. Anybody who
can afford it can buy an IP address and set up a website and
claim to be generating the and
disseminating political information. For us, what that
meant is that we stumbled into an election in 2017 in which
people were not reading from the same script. Look, we can disagree all you
want about political outcomes. We don’t have to agree on
anything about politics. But, for a society to function,
we have at least to agree on the facts. We at least have to start
from the same place. But if I’m telling you that
vaccines are good, and you’re telling me that you read on the a website run by Dr
I Just Made This Thing Up and he told you it was spreading
autism, then we have a problem. Because I can’t argue with you. This was a huge problem in 2017
in Kenya. It was that people were not operating with this same facts, and
therefore the political discourse becomes meaningless. It’s just people talking past
each other. Overall, the lessons from
Kenya’s first digital decade have been number
one agency. We all have the power to shape
how technology functions. We’ve given far too much room
and far too much leeway to tech
companies to determine the parameters of this
conversation. Just because, and I don’t know if I should say
this because I think they’re a sponsor, oh, well! Just sponsor,
oh, well! Just because Google says “first do no harm” doesn’t
mean that you just have to take their word for them. We have the
power to keep these companies accountable for what
happens next. The first thing that we realised
is very quickly in the Kenyan
political process is nobody really cares what happens in
Africa. None of these tech companies
really care about what happens in Africa. Twitter probably didn’t set out
to play a starring role in keeping
the independent electoral boundaries commission in Kenya
accountable, but it did, because of the agency and the creativity
of Kenyan people who decided to use whatever platform was
available in order to make accountability
happen. Hashtag WhereISMy Form 34B. When the announcements were made
of the polling station, they went to Twitter, and they said,
actually, whatever is being reported on that website, it’s
not what was announced at my polling station. That result is
inaccurate. That is agency. That is creativity. The other thing
that we learned from our experience in Kenya’s first
digital decade is that money and mocks,
and politics, when you mix money, technology, and application,
without any framework or oversite of accountability,
you’re asking for problems. We threw technology at some of
the most complex social and political issues that our
society had faced. We said to ARKofi Kofi Anan, if we put computers
in, it will engender trust and it will be okay. We tried to
absolve ourselves from the very important questions that needed
to be asked about our polity. Of course, it back Scotland Yard
spectacularly. Spectacularly. $28 per head, and that was just
the first round. Keep in mind, we ended up doing
it twice. And what did we get for it? I would argue that Kenya is more
uncertain today than it has ever been. I would argue the
political system is more compromised today than it has
ever been. And I get frustrated because I look at the Gambia, which voted out a
22-year -old autocracy using marbles. They counted the marbles and
said, yes, this guy won. It ended a 22-year-old
dictatorship using marbles. The lesson in Kenya is throwing
technology at your social and political problems is not going
to fix it. Throwing money, especially, at
these problems, without interrogating the source of that
money, and the interests of that money creates far more
problems than it solves. You’re sitting there thinking, this
sounds like a whole lot of things that Kenyans did to
themselves. None of the companies that have
been implicated in the complications
and the frauds of the 2017 election are
Kenyan. They are French, they’re American, they’re British, they’re from
the the United Arab Emirates. What has happened is that we
have made elections a a massive
profit-making industry. That has created parasites, and
it has created an opportunity for what I call digital
colonialism. People leaving Europe, leaving the Middle East,
leaving North America, going to some of the most
fraught social and political situations in the
world, influencing political behaviour for a quick buck. That is exactly what
colonisation was for. It was using other people around the
world to make money here. It’s really important for me to make
that point in this room, because I think sometimes, as I said, we
tend to think of what happens around the world
as an abstraction, right? As long as it’s happening over
there, then I don’t have to sit in the discomfort of what is being done
in my name. Well, I’m here to tell you, as a
person who has lived through some of the ugliest chapters of
her country’s history, that you do actually have skin in the
game, and you do have things that you can do. Here, in Here, in Europe, that
would help us halfway around the world. What happens in Africa,
what happens in India, what happens in Brazil, what happens
with all of these tech companies that are fundamentally American
tech companies, is something that you should pay attention to
to, is something that has an impact for us, and it’s
something that you can help protect us
from simply by paying for attention and
asking someone sitting in the discomfort of some of these
issues that have come up. I know it seems like I’m telling
a very sad story, and and, in some
ways, it is. I was definitely a much more
optimistic person in January 2017 than I
was in December 2017. I feel like I’ve seen, as I
said, the best and the worst of human
nature over a 12-month period. I saw people using social media
to keep the police accountable for
police brutality. I saw people keep the
independent election boundaries commission accountable for the
fraud that was attempted in 2017. But I also saw people
disseminate some of the ugliest hate speech I’ve ever
encountered in my life on social media. I’ve seen technology build and
destroy. And I want to leave you with that particular note. I’m not a tech optimist or a
tech pessimist. Really, I consume, the same as
anybody else. What I am, I hope, is a tech
realist, and, as a tech realist, what I
can remind you is human beings are part of the equation and
must always be part of the equation. We can’t use
technology to absolve ourselves from some of the more difficult
conversations that we need to have about the kind of social –
this the kind of societies that we want to build. We can’t
pretend that the decisions that we take in how we build
technology and how we deploy technology in one part of the
world, or in another part of the world will be are not connected.
We can’t pretend that our agency and our creativity doesn’t
matter. I leave you with one story from
the Supreme Court proceedings, and it’s one of my favourite
stories to tell, because I still can’t believe that it happened. The Chief Justice in Kenya is a
very stoic man. He wears his glasses – anybody who wears
glasses knows this look, when you look at people over your
glasses to intimidate them. And in about the second day of the proceedings, the lawyer for the
the Independent Elections And boundaries Commission was
commending. He had – evidence had been
submitted that basically said what they
were presenting was inaccurate, it was untrue. This is illegal. hallenged. He had – evidence had
been submitted that basically said what they were presenting
was inaccurate, it was untrue. This is illegal. Lawyer said said, “Your honour, I want you
to ignore that. Whatever was used for the
international reporting for the results, for
swearing in, ignore it, it’s not real.” The Chief Justice did
that thing with his glasses where he looked over
he said, “If they weren’t the official results, then what were
they? ” The lawyer goes, “Um, um, um
… they were just statistics.”. as a person who had been
watching the results, it was kind of like, you know that
moment if you watch Friends – I know it’s very popular in
Germany – when Ross says, “We were on a break”
and you just can’t believe that someone would have the temerity
to say something like that, that was the moment for us as those
of us who were watching the proceedings, where we realised
that everything that had happened,
everything that we had been sold was a a
lie if the lawyer could stand in front of a Supreme Court judge and say, “They were
just statistics. Pay no attention to them.”. thank you
for your time. [Cheering and applause]. Thank you for your time.
[Cheering and applause]. – Nanjala Nyabola, your
applause! [Applause]. I think we have more than enough
final for a Q session for questions from the audience, so,
if questions arise, there will be a microphone in the middle of
the room. If I see your hands, I can point the guy with the
microphone to you. Do we have questions so far? Yes, in the front row here to
the left. – First of all, thank you for
your inspiring talk. I have a question: in what way
do you think is it important to create more access to venture capital in
African nations in order to give entrepreneurs from the area the possibility to
create technology solutions that are funded by members of that specific society
as opposed to international
corporations abroad? – Is this a question about Jimmy
– it’s a German company that is billing itself as an African company, and it’s
raised a lot of questions about funding in tech. I’m not a
venture capitalist. I don’t know anything about
venture capital. I think that I’m really more of
an activist, and and my impulses in
activism is to begin with a point of solidarity. One of my favourite quotes is if
you have come to save me, then you are wasting your time. But if you have come because you
realise that your liberation is bound up with mine, then let us work
together. [Applause]. I think a willingness to get in
the trenches, I think a willingness to be in solidarity
with the corporations, or the individuals, or the innovation
people that you want to work with. I think a willingness to to
absolve, absorb risk and take authentic
risk rb risk and take authentic risk
is a thing to this about. I’m not a venture capitalist. As the kids say, YOLO, try it,
see what sticks! – Okay, more questions
questions? – Have I answered all your
questions? questions? – Yes, here. To the left. left.
– Thank you for your very balanced talk, and it was, as was said, very
inspiring. I liked how you talked about how Europeans as
well have their skin in the game, and, when you said that, I
was just asking myself what do you think is the role of
Europeans, how can they or we dismantle some of these
structures of digital colonialism, as you described?
– I love that question so much. I think it’s such an important
question. Well, the most direct route,
actually, has a little bit to do with the venture capital
question, is shareholder accountability. Many of these
are listed companies. And, really to start to ask questions
about where your money goes. If you actually start to pass
through what your pension fund is investing, to pass through
what your – you know, some of these investment vehicles that
are presented as neutral things, if you really start to ask
questions, then you end up with a lot of really interesting answers as to how the the the
biggest industrial complexes get funded and financed. But when it
comes to this particular question, it’s to start with the
shareholding, and start to see, well, is there a way that people
who own parts of these companies can be forced to pay attention
to what it is that these companies are doing? Some of
them are not listed corporations. Some of them are private, you
know, like Cambridge Analytica was a private corporation.
Support investigative journalism. Support people who
are actually trying to hold a lens up to these stories. As a
Kenyan writer, one of my biggest points of frustration is this,
is that I would love to be able to tell all of these stories. I
can’t afford to tell these stories the way that I would
want to tell these stories, and I see things, and I’m paying
attention to things, because I live in that context that a
person who comes to Kenya three weeks before the election talks
to five people and then goes back to Berlin, or Paris, or
whatever, is not going to be able to see, and so really to work as I
said in solidarity with some of the people who are actually
trying to keep people accountable in this particular
this particular context, with the European election, with many
of the Kenyans companies – Kenyan journalists knew some of
this stuff was happening long before Western journalists knew
it was happening, but nobody paid attention to the Cambridge Analytica story in
thoroughly detail because the Kenyan journalists were scared because
it is a scary context, and intimidated into not reporting
the story, and the Western journalists didn’t see this this
tech politics conversation as something that was worth engaging with in
detail until the Brexit vote, until the Trump
vote, so, when I say that that I see two things that need to
happen, one is we need to be better at talking to each other
as equals, but on the other hand, I think we need to to shift the central narrative
referent object of how we tell stories about the world. Everybody came to Kenya looking
for violence, and a violent
election, and when the violence didn’t emerge
the way they thought it would have emerged, there was a sense of, “I don’t
know what is happening, I don’t know how to tell this story.”
That to me is a good opportunity to look at a local journalist,
and say, “What is the story here? How can I tell this story better
?” Have I answered your question? Thank you so story
better?” Have I answered your question? Thank you so much.
Here. – Another question in here in
the middle on the sixth row. – So, you know we don’t know
much about Kenya here in Europe, and and
what we hear about Kenya is good news, it’s told us that there is
good news happening in Kenya. Kenya is leapfrogging, Kenya has
done things others haven’t done,
Vodafone, which is also a telephone provider here has invented mobile money, and
and Mkoba has electrified more houses in Kenya in a few years than the National
Grid in decades. Is that narrative we hear about
Kenya, this hooray narrative, is that your narrative as well? – Sorry, I had water in my mouth
mouth! I’m not a tech optimist or a
tech pessimist. I like people. I like to pay attention to what
people are doing, and and I think this is not a Kenya
problem, this is not an Africa problem, this is a global
problem. I think tech companies have been very good at selling
snake oil. And they’ve been very good at
building narratives around why they exist
so that Google is not just a search
engine but is, you know, connecting people and fixing the world, and whatever,
and we have become so enamoured with
all of these beautiful complex, oh, my
gosh, they’re saving world narratives, that we’ve become
less good at paying attention to what they’re
actually doing. I think that we have a situation whereby tech has enabled a great
deal of creativity, and a great deal of innovation, and in many
ways, as I said in the beginning, Kenya is an
outstanding example of how, given the right tools, and given
the right means and platforms and whatever,
people can do amazingly creative things. But it’s never really that
simple. I think we have a lot of
challenges, and I like that you picked
Mkopa, because I think if you talk to people who work in tech
in Kenya, their version of the Mkopa story is a little
bit different from the story that is
told. To me, that is really a
reflection of how we as the public have
stopped asking or expecting things to be boring
but clear, and direct, and efficient. We like the snake
oil. We are the hipsters who don’t
just want to eat cheese but we want to eat artisinal that has been farmed
to pasture, cattle that gets a massage every night before they
go to bed. It’s cheese. Just eat the cheese or don’t eat
the cheese. That’s where I land. There are a lot of amazing
people doing amazing work in Kenya. There are a lot of really
terrible things happening in the tech space in Kenya as well. I
hope that answers your question.>>Thank you very much. Nanjala
Nyabola. [Applause]. . – Just a little break, and then
we have a different planning. So, originally scheduled was a a
session called Fairness and competitive in a digitised world at 1. 45 but this session has now
shifted until tomorrow to 12030 to 1230, but instead coming up
next is a speech called The Algorithmic
Boss with Alex Rosenblat. Thank you very much.
. .
. .
. .
. .
. .
. .
. . –
TRANSLATION: Welcome back to subscribers stage 1. I hope that
the lunch table was good for you. The second day of
republican republican. There’s – day of re:publica, a certain
heaviness. The next talk is in English. If you want to relax
and don’t want to run the translation machine in your
head, and, translators, they will be there in your head
instead. Applause for the translators. [Applause]. The next scene that we are going
to present here, we know this scene. You enter the office, your boss
might be there. Maybe he even worked through the weekend. Your boss has a a bad mood, and
you realise how everyone has a bad mousse in the office. You think to yourself, okay,
this is annoying with this boss, but at least they’re human. If your boss is an algorithm,
then algorithm, then it can get more complicated. That is what this next talk is
about – algorithmic bosses. This woman that we are welcoming
to the Stage 1 is an author from a
book, book, Uberland – how algorithms
rewrite the ideals of the office.
[Re:publica theme music]. [Applause].
>>Hello. Thank you for coming. Can everybody hear in the back?
Good. My name is Alex Rosenblat. I’m the author of Uberland –
Howe Algorithms are Rewriting the
Rules of Work. I hate PowerPoint slides but
they’re images of the the book, I promise! I’m a researcher at the Data and
Society Research Institute which is based in New York City. I’m
going to give you this speech in two parts parts: one part is a
speech I was planning to give you, and then I changed my mind. I will also give you part two.
They require different lines of thought, but I want you to think
about both of them. First, I’m going talk about a
cool taxi service, but then I’m going to talk about technology
and shifting powers of governance in the United States. Between 2014 and 2018, I
conducted qualitative research with hundreds of Uber drivers across more than 25
cities in the United States and Canada, and across online forums
with memberships of about 300,000 drivers collectively. Although this workforce is dig
aggregated, and they have no normal way to speak with one another
through Uber, they’ve forged a workplace culture online in
Facebook groups, WhatsApp, dedicated message boards and
driver websites. Every day for about four years, I would sit
there in their forums reading their posts, screen shots, in a
digitally mediated workplace, there is so much evidence that
can be collected and shared about the changing circumstances
of your working conditions. Algorithms are a really
important part of Uber’s employment model. Algorithms are simply rules that
are enacted by code. The The search algorithm is the
most familiar one. When passenger demand for rides outstrips the local supply of
drivers, Uber surge-pricing algorithm goes into effect,
multiplying the price. Some drivers describe it as a
herding tool because they get notices
and maps that light up their apps saying
go to to where there is high demand in the hopes that you
will receive a higher pay premium for your work, although
there is no guarantee. One driver, after spending 20
minutes relocating to a surge-pricing zone received a
request from the dispatcher for a trip outside of the surge
zone. There was no premium. He cancelled, and the next day
he received an email from Uber threatening to deactivate him, a technology
word for suspend or fire him” because he was accused of surge
manipulation. That’s important because drivers
are billed by Uber as entrepreneurs who can be their
own boss, someone who can make an informed decision about the
takeaway work they’re willing to do might be able to reject a trip that doesn’t
come with a pay premium, especially after their
algorithmic manager suggested they should relocate for just
such a purpose. But the lines of control are
quite blurry in a system of algorithmic management. Uber’s more infamous CEO
described the surge-pricing algorithm like
this “Uber is not setting the price, the market is setting the
price. We have algorithms that determine what the market is.”
The implication is strong. Uber has the technology to
reflect the dynamics of the marketplace, the
technology itself is neutral, the connective tissue that
matches a passenger with a driver, and that’s very
important, because when Uber identifies as a technology
platform, it’s suggesting that it is simply a
neutral intermediary like a credit card
that facilitates interactions. I found in my years of research
that Uber has the ability to manipulate the conditions of
work and the conditions under which passengers accept rides as
well. For example, computer scientists
at North Eastern University found
that passengers standing in the same surge-pricing zone might
different surge multipliers. Your fare will cost three times
as much as the market rate. I’m standing next to you, and
somehow mine is only going to cost 276
times the market rates”. Uber describes this finding as a bug.
But I would suggest that it is part of a larger structure under which
people can be targeted for different prices and different
wages. The dynamics of price discrimination are quite
familiar across lots of different practices. But they’re
contradicted by the idea that technology is offering a neutral
service. The tension here is trading on
assumptions about technical as neutral to quietly manipulate prices and
wages. Of course, the exculpatory
language is fascinating. When Uber says passengers will
pay what they are willing to pay by route
rather than standard rates, it’s called
price administration but Uber describes it as an innovation in
artificial intelligence. Again, and again, we return to the
language of technology asking us not to believe what we see but
to see something different there. Uber promotes drivers as
entrepreneurs who can be their own boss, but in my research, I found that they do
have a boss – an algorithmic one, that enacts the rules Uber sets for how
drivers should behave on the job. Drivers don’t have a human
supervisor, nor do they have an employment handbook, and this is quite common across
the gig economy sector, because there’s an intense debate over
misclassification, whether workers in a gig economy who
find jobs through an app are properly classified as
independent contractors rather than as employees who might be
entitled to workplace protections, so nun gets told, “Here’s how you
should do your job” from day one. There’s no employee
handbook. Instead, they get messages, as
drivers do, such as passengers give the
best ratings, or five-star drivers
behave in the following list of ways, and from that notices
about how drivers are suggested they behave in order to earn high ratings, that’s how a –
that’s how a lot of how Uber might communicate, how exactly
you should behave on the job. Of course, you don’t have to behave
in the way that Uber suggests, but the rating system is actually a
management enforcement tool tool if your
rating drops below the the threshold
set, say 4.6 out of five stars, you risk being deactivated,
suspended, or fired. For drivers, it was unclear to many
passengers what the rating system stood for. If you’ve taken an Uber, a Lyft
or something like it, you’re prompted to rate your experience
on a scale of one to five stars. For a lot of people, they get
from A to B, all their limbs are still attached, they might say,
this is four out of five stars. This wasn’t the most
mined-blowing trip I’ve ever had, but it got the job done.
For drivers of four out of five, it might be a failing grade. So they go home and be animated
by anxiety over which passenger gave them the bad rating or an
unfair rating perhaps for circumstances beyond their
control such as when there was bad weather or surge pricing was
in effect. There’s also the quite
interesting phenomenon that, although
drivers are narrated as entrepreneurs who
run small businesses, Uber might suggest to them, you know,
riders count on Uber for a comfortable and relaxing
experience. They prefer drivers not to promote other businesses
during the trip. Since drivers actually earn referral bonuses
through affiliate codes from referring passengers to new
services they haven’t tried yet like
Lyft, Door Dash, the variety of local offerings in different
cities across and beyond the United States and Canada,
stopping yourself from suggesting another business or
promoting another one will directly hinder your own
business interests as a driver. You won’t earn that referral
bonus. Although Uber has made some improvements in their
rating in the last year or so, they might now receive ratings
protection, so that their rating that they receive for
circumstances beyond their control leek
weather no longer affects their average, but that took many,
many years to get to, and it wasn’t the feature of their
workplace during the times that I was observing drivers. And the funny thing is, although
you can introduce new features that might improve the driver
experience, there’s absolutely no guarantee that those features
will still be in place six months later. Drivers might get a pay
increase, and then half a year later, the rates at which they
earn per mile and minute are cut. If you have a contract, your
employer can’t simply experiment with the different features at
all times. There’s specially not if you have a contract as an
employee or if you’re part of a collective bargaining agreement,
you agree on what the terms will be. In a workforce where drivers
really have few employment protections
because they’re I understood contractors and in an environment mediated by a
technology app, you can constantly experiment with the
conditions of that workplace. I think that’s actually
complicated from open media coverage of these companies as
well, because they might generate positive credit for
introducing a new feature, but that feature
might be taken away leading to, for example, driver protests. For a lot reasons, having an
algorithmically managed workplace is quite efficient and
practical. Setting a route with a GPS-mapping algorithm saves
the driver a lot of trouble because they don’t really have
to know where they’re going to start doing this work, although
it helps. But there are policy consequences for the no
following the recommended route that is generated by the
GPS-mapping algorithms. If you deviate from the route
recommended as the passenger makes a complaint, you might notice
that, as a driver, there is some missing pay from your pay stub.
You won’t be directly alerted that your pay is being changed.
You have to catch it and try to email with remote customer
service agents who might be located in the Philippines over
your missing pay, and what you can expect is an exchange of
three to six emails going back and forth with template
automated responses, so much so that the drivers started to call
these customer service agents Uber’s robots. One driver was a bit savvier,
and he, along with what many others had started to do had a dashcam when a
drunken passenger pass saying turn left here, right here, and
don’t do what the map says. The next day, the passenger woke up, emailed Uber and said this this
driver took me on a ride. They didn’t follow the correct route.
Uber said you’re right, and retracked the wages. But this
driver had this dashcam footage which he adopted because
drivers largely perceive that Uber
favours the passenger in disputes and Uber doesn’t monitor the the qualitative
experience. He was able to get his money back, but for a lot of drivers, it’s a
daunting tasks, and you may not benefit in the end from spending
an hour writing back and forth with unfeeling customer service
agents when you could simply do another trip. It’s a picture of this workplace
which why it may scaffold potentially
low-wage theft. Even though drivers are supposed to be their
own boss, Uber can reduce the control they have to how they
respond to algorithmic dispatches. In other words, they can’t just
choose the job that works for them. For many cities in a long
time, Uber automatically dispatched Uber pool trips, the
carpool service to drivers who really wanted to work for
Uber X and do the same for truck drivers who made the capital
investment of purchasing an SUV to dry R drive for a higher
tier of service. Instead, they might get Uber X
at which Uber earns lower rates for
operating a Prius. Drivers had mixed success trying to opt put.
email with Uber saying I don’t want to be given dispatches, and
sometimes Uber would say, okay, and another time they would not.
They would continue to dispatch more unfavourable trips. A lot of drives dislike the Uber
Poole option. You have to manage the different passengers in your
car. One likes this music, and another one likes this other
one, and they have a conflict. In the end, they give you a bad
rating which the drivers are very frustrating. You don’t earn more for a
driving trip with multiple passengers as opposed to taking
a request from one. These sound like small items but they build
up to a larger case about the kinds of employment control that
Uber can leverage over how drivers behave at work. They in
many ways contradict the idea that drivers are operating their
own small businesses. In a lot of ways, despite
marginal improvements to the feature of driver work, drivers
don’t have the information or power to make informed
decisions. In the most basic example, Uber
hides the passenger destination from drivers before they arrive
to pick you pup. Let’s say they have to drive for 20 minutes
after they accept the dispatch and you only want to go two
blocks. The driver’s losing money on
that trip, and, of course, the driver can technically cancel
the trip if they prefer, but they risk being deactivated by
Uber if their trip rejection rate is too high. And so they
never really have all the information they need. They can
sort of try and make decisions in the margins of their
experience. Interestingly, the the policy of
blind passenger acceptance was touted early on by Uber as
something that could resolve a long-standing issue of
discrimination, for people of colour, especially for black men
in the United States, it’s really hard to hail a cab at the
side of the road. They might pass you about and go
to the nearest white passenger. Uber’s technology was promising,
as Silicon Valley often does to resolve a long-standing social
issue of discrimination. And what I found in my years of
research with drivers is that it was part of a larger policy of
algorithmic management. When McCall, an Uber driver was
driver one day, she had the passenger who called her the N
word. She stopped the trip early and Uber retracked refracted some of
her wages. Thee received a form letter back
after Uber – Uber said we will be sure not to match this
passenger with you again. She was disgusted, because this
meant that the passenger could continue to ride with other
drivers who were then vulnerable to harassment to deactivation,
and to other consequences. It was a good reminder that there
is a subtle implication to a washing place where so much is
automated, where so much communication is principally
robotic. Uber could have communicated that her reporting
was valuable, that she was there to protect her fellow drivers.
It could have built a core sense of participation from a very
disaggregated driver workforce, that you’re there to protect
each other, and now your employer has your back.
Interestingly, the types of communication that take place,
the quality of them being so robotic, lots of emails and
texts, having no human resource that you can check in with, it leaves a driver vulnerable to
phishing attacks. Text and in-app notifications
may be coming from somebody else. Scholars have highlighted
this in particular, and I also remember an early post a driver
made to one of the Facebook groups, these online forums I
referred to earlier. Someone had phoned him to get
his banking information, “I’m Uber,
there’s a problem with your payment going into your account. Can you please given us the
correct digits?” When the driver posted this incident to a forum,
the drivers were laughing, saying you should have known
Uber would never phone you. And so there are interesting ways
that new efficiencies that are
ushered in by technology also experience soft
losses of social trust and how you fine
and reassure workers and and
employers about how their bond will be secured. There is also
of course material benefits to automating work
flows, and algorithmic management system dispatches drivers, jobs largely based on
the nearest driver gets the ride, and for a lot of drivers
who had previous occupational identities as drivers, like
former taxi drivers, truckers, this was great, because it
seemed more fair. In traditional practice, you might have to tip
the dispatcher to get more favourable rides, so this seemed
like a really great option. My colleagues and I at Data and
Society started to compare the Uber model which has often been
the symbol for the gig economy and disruption from the Silicon
Valley with other market places that exist for employment, and
in particular like various student colleagues noted that
women were a big missing piece of the gig economy
conversations. Professor Juliet Dacona and
another looked at how – cleaning
workers, and we Qom paired it what I was seeing
as well. Sites like care.com which have over 10 million
active profiles compared to the 900
million active Uber drivers at work each other, are quite a
significant part of the gig economy but they’re often
understated in media coverage. We found a very important
distinction between these two work places. Care.com workers
would have to message back and forth with potential families, employers, a whole bunch of
times, like online dating, just to try and get the job. With Uber, you had an
algorithmic dispatcher who got you the job, even if its details
could be unfair. A lot of drivers will say they prefer
this feeling of independence that comes with an algorithmic boss
compared to a human manager who might be looking over your
shoulder, but when something goes wrong, they finally have
very little recourse or source of redress. A faceless boss can also monitor
you in more granular ways. Uber monitors when drivers brake
too hashly, or accelerate too quickly, or when a passenger
claims that they are drunk. It might not be uncommon that a
passenger who gets in the car at the end of the bar shift might have the
odour of alcohol or drugs, and the driver takes them to their
destination, but then the next passenger gets in the car, and
says I detect something, it smells off in here. They report
the driver for driving under the influence. Suddenly, the
driver’s lost his job on the side of the road, emailing back
and forth with remote customer
service agents in the Philippines trying to get
reinstated, saying I’ve got dive bees teeth, if I had been
drinking the way the passenger said, I would be in the
hospital. Help me to resolve this issue. A lot of what I’ve
described today is quite a lot about control and
how you leverage it, even if it is under
the auspices of technology. Silicon Valley algorithms have a
mythical reputation of neutral, objective, at even benevolent
tools which reduces the appearance of control. For example, the Facebook news
feed is curated by algorithms, and there is no visible editor
manipulating what news you see. Similarly, Uber tried to
replicate that work relationship in the workplace. There is no
visible manager. But the company establishes the rates at which drivers earn, their
small business income, and changes those rates. It monitors
those behaviour and disciplines them with threat of
reactivation. Uber also bring the culture of
experimentation that Silicon Valley is known for to the
workplace. Starting in 2016, Uber quietly implemented a practice called
upfront pricing which supposedly fostered greater transparency.
Passengers could see in advance what a fare would cost based on Uber’s
best guess or estimate of the trip tally, rather than waiting
for the tally at the end of the trip. The problem was not everything
was so upfront. Months before Uber finally admitted that it
was charging passengers sometimes a higher fare than the
driver was earning, drivers started to
notice discrepancies. They might ask a passenger to
see what they paid at the end of a trip because it didn’t match
the fair they expected. You see, drivers had a contract
and an understanding with Uber they would remit back to Uber 25
per cent of what the passenger paid, plus Uber’s fee, so it’s
usually like 30 per cent. Suddenly, a driver notices the
passenger paid 90 dollars, and the driver earned 30. In some cases, actually, the
passenger paid less than the driver was earning, but the problem is that this
this all happened opaquely. Drivers crowd-sourced receipts
and comparisons in online forums across the country to identify
that Uber had implemented a knew pricing policy that contradicted
what they expected from their contracts. And I think to some
extent this was another feature of experimentation that Silicon
Valley is so famous for. You know, Facebook similarly
conducted an emotional contagion
experiment on unsuspecting users a couple of years ago. Some users were displayed sadder
posts and others were displayed happier posts to see if the
effect of that emotion would spread. When Facebook published
the results of that study showing the effect of
emotional contagion, people were really mad. They were mad because they were
guinea pigs on experiments in their emotional health they
didn’t sign up for, and they were bad because it contradicted
the idea that algorithms that curate so much of our lives are
not, in fact, these neutral facilitative objects
that they can minute us when we are
unsuspecting. Now I’m going to switch gears
will be. As some of you may have heard, Uber is going to become a
publicly traded company on 10 May, and that’s
repositioning Uber at the centre of debates over the impact of
technology on society. What I’ve noticed in my years as
a Canadian in the US is that Silicon Valley is almost a part
of American nationalism. When technology companies make exaggerated claims about how
their technology changes the world, those claims are circulated as
credible because there’s such a strong belief in Silicon Valley
as a fountain of genius, progress, and economic activity. For example, Uber rose to power
in the aftermath of the great recession. And it promised to
scale entrepreneurship for the masses. That was a very appealing claim. It drew on Americans’
valorization of entrepreneurship that you should become a
self-made person, and the time was right. Cities were covered
in blight, people lost their jobs and businesses, and here
was this app offering not only a low barrier to access a job – anyone
could sign up and be at work in under
a week, even – they were also offering a path way to the the middle class in a
country Shane by economic insecurity and employment precarity. They said
come and work for us, the median drivers in New York City
earn upwards of 90,000 a year. It was an appealing claim.
Unfortunately, it was true. The federal Trade Commission
investigated how Uber recruited drivers with exaggerated earnings claims
and and found that they had been dereceiving drivers with – dereceiving
drivers. – deceiving drivers. Uber settled for 20 million
course Lars. The claim was circulated that
drivers can start working for an app and earn $90,000 a career. It was foundational to creating
an environment of regulatory forbearance, where the laws
exist but regulators choose not to enforce them because they
don’t want to stand the benefits and promise of innovation, so
these foundational myths about what technology can do do a lot
to create regulatory arbitrage for companies, particularly in
the United States. I think outside of the United
States, Uber has faced a lot more
backlash over promising claims about how its technology is different from
what it appears to be. It wasn’t until last year that economist
Laurence Mitchell looked at all the different earnings claims
around drivers and their expenses and found that drivers
were actually earning on average about a take-only pay of
10.87 US dollars an hour after deducting the fees, and the
mandatory and social security and Medicare taxes that
drivers must say. It’s not that the US is uniquely
vulnerable to a nationalistic belief in a natural resource. If
a company in Canada where I’m from, I will take some licence
here, made claims that they could offer jobs in the
oil-field that paid tremendous amounts and that also with social
discrimination and would make Canada’s name on the world
stage, I think some of those claims might be credibly
circulated. Every country has foundational myths that people
buy into and give promise and hope to what can be offered. In
the United States, technology and nationalism is inextricably
linked to changing structures of collective governance. On 10 May, Uber went to the
Stock Exchange and valued valued
upwards of $91 billion. The company is asking the public and
lay investors to set aside the fact that it is an unprofitable
ride-hailing company that fact that it is an unprofitable
ride-hailing company that lost $1.8 billion last year. Instead,
it asks us to believe that it is the future of transportation. This is the same company that,
along with Lyft, its major competitor
in the US, insists it’s not in the transportation business
because it’s a technology company. The EU ruled differently on that
where Uber was considered a taxi company. In the United States,
both companies continue to argue that they’re not obliged to
provide accessible services to passengers with disabilities
under the Americans with Disabilities Act unlike their
competitors in the transportation business. That said a lot I think about
Uber’s ambition to be the future of transportation. It’s the kind
where the company can issue responsibilities to civil
society. Technology isn’t just disrupting
how we work, or how we get around, it’s distorting reality,
and reshaping narrative. What I’ve seen through my
research is how much technology disrupts a shared set of facts
and understanding. Uber looks like a taxi company
been but it side steps regulations
designed for for – while Uber promotes drivers and
entrepreneurs and classifies them as independent
contractsors, I found that drivers do have a boss –
abalgorithmic one, to control from a faceless boss is often
hard to pin down, but the effects are evident. For Uber’s
drivers, technology is being used to exclude workers from the
the entitlements they would be –
some youer drives including in Los Angeles and New York, are
planning to go on strike in advance of Uber’s IPO, drawing
attention to pay cuts and unfair working conditions. But Uber leverages technology to understand – to change how we
understand work in the first place. Algorithmic management
isn’t that different from management because Uber
leverages significant control over how drivers work. It sets
the rates, for example. It doesn’t give them the
information they need. That issue of control is at the
centre of disputes over whether drivers should be classified as independent
contractors or as employees, and, across the world, beyond
the United States, this has been a a site of dispute with
different resolutions. When drivers from Massachusetts
and California sued Uber in a
class-action lawsuit alleging they were misclassified as
independent contractors, Uber argued that drivers are customers of
Uber’s technology, just like passengers. That sounds like
word play, but it’s changing the paradigm. Uber drivers may have
to seek redress for unfair working conditions or
wage cuts, for example, not under labor laws but under
consumer protection laws that prohibit unfair and deceptive
practices. It is important to keep in mind that Uber drivers only account for 0. 56 per cent of, less than one
percentage of the labor force in the United
States, but Uber’s cultural impact is far greater because
it’s introducing these fundamental paradigm shifts
about what it means to work or to consume
technology. Uber constantly advances the culture of
technology to suggest it isn’t what it looks like, and that
cultural work is largely successful. In the United States, direct
implications for its business model. For example, Uber is
either misclassifying millions of drivers as independent
contractors, or it is co-ordinating prices for
millions of small independent firms, which
may violate anti-trust laws, against
anti-competitive behaviour. Uber is unique in that it exists
with so many multiple and conflicting
truths but that doesn’t cut both ways. For example, the city of Seattle
tried to extend collective bargaining rights to drivers but
the Federal Trade Commission and the US Chamber of Commercial sided with Uber to to
oppose the mainstayer of the city’s efforts. They can’t bargain collectively
over pay because it’s anti-competitive for a group of
small independent firms to come together and decide on the price
they’re going to charge to consumers. And yet, that’s precisely what
Uber is able to do through algorithmic. Uber’s – and thousand that
becomes entrenched in law and practice is a microcosm for a
larger political battle over power and governance. Technology has changed how
people access information, as Dr Dana
Boyd argues. Tech platforms from Google to Facebook are
structured to personalise our experiences as search engines
and recommendation systems. These structures are raising new
questions about how we arrive at a common set of facts in a
networked media environment, especially because the
vulnerabilities in that information architecture can be minuted manipulated to shape public
knowledge. When there is more uncertainty
about governance in the United States, platforms are
increasingly recognised as proxies for political power. We
appeal to Facebook, to Twitter, or to Google to regulate
discriminatory bias, or speech. On a national scale, what we are
seeing is a cultural France ferns. American culture is anning
itnistic to the perception of government overreach. Now
Facebook is constantly in the news dwarfing countries of
actual countries. The company stands accused of
overreach, for privacy violations and for threatening
democracy. It’s the same American grudge, but redirected
at an entirely different governing structure. It suggests we are looking
towards tech companies to usher in new forms
of governance. Elected politicians sense that
something is amaze, even if they don’t know exactly what to name
the source or the effect. Some are striking back. Republican Senator Ted Cruse and
and Congressman Nancy Pelsoi and
Elisabeth Warren are acknowledging that
tech companies have too much power. Some, like Senator Warren, frame
it was a monopoly issue. Others, like Senator Cruz say
it’s an issue of free speech saying that technical companies
have a bias, in how they curate our media and
information spheres. Silicon Valley companies extended in a
largely unregulated environment. No-one wanted to hinder
innovation. Political leaders, as well as scholars and
advocates proposed to reduce their power by regulating them.
The growing political will to regulate Silicon Valley is
evident, and that may be a direct risk to the business
models of companies like Uber, Facebook, and others. But
technology also changes how we understand what we know or how
we arrive at a common set of facts. For my own research, I’ve seen
that regulatory interventions into
Uber’s model are not co-ordinated or
aligned, regulators – their conclusions are at
cross-purposes for resolving their employment relationship. Between Uber and its drivers,
and for gig economies, workers broadly. For example, the New York State
insurance and appeal board ruled that drivers employees for the
purpose of receiving unemployment benefits. Multiple states have past laws
qualifying Uber driver status as independent contractors.
However, as law scholar Charlotte Garden has observed, how states
define Uber drivers has no legal
effects on determinations under the federal
fair labor standards or the national labor relations act. Of
course, one consequence of a patchwork approach like
challenge ago company like Uber is that the
company might sail through variable interventions into its
labor practices. The challenges professional drivers face to
earn a wage are real. Even if Uber is replaced tomorrow, the
legal battles over what kind of a company it is and whether its
practices are lawful will have long-lasting effects on how
people are employed. In the end, that’s a larger metaphor for
what gets underwritten by the power of a faceless boss.
Technology is raising questions about who holds the power of
governance, over whether it is over the rules of work or the
regulation of speech, and the questions suggest that the
ground has already shifted beneath our feet
for epistemic epistemic epistemicological fracturing. I
will happily await by the side of the stage for anybody who
wants to have a short discussion. Thank you opinion . -.
– Alex, thank you. – Alex, thank you.
– TRANSLATION: For questions,
please go next to the stage, to this means we have a little more time, so so we
have time to find new seat, maybe get a coffee, because it
will take a little while before it continues here – at
least 20 minutes, and, until then, I hope to see this room
filled to the brim. See you later!
. .
. .
. .
. .
. .
. .
. . .
. .
. .
. .
. .
. .
. .
. .
. .
. . .>>Okay. [speaking in German] — ask me
anything. International space — Alexis Hope. MIT Media Lab in Cambridge in
the USA. Yeah. [waiting for translation] lation]-Alexis Hope. [ Applause ] ALEXIS: Hi, everyone. My name’s Alexis Hope. I’m a researcher at the MIT
Media Lab at Cambridge, a furniture dine
at the Massachusetts College of Art and design and a learning fellow at Lego.
Today we are talking about building joyful futures which is
important because the world is deeply messed up. We hear a lot
about the terrible technological futures in store for us. For
many of with us these terrible futures are the present. Technology is eroding our civic,
economic and environmental lives. A lot of what I see the
technology industry put out seems to be making things worse by amplifying
inequalities and systems of oppression that have been around
for a long time. Solving the problems of a select few at the
expense of many. It can be hard to feel motivated to design technologyies against
this backdrop and I struggle with that, but I believe in the
possibility of making better worlds or I wouldn’t get
how out of bed in the morning. Perhaps a Gravitational field
that no one can escape is not
everyone’s favorite. Everyone has seen this, it
represents the peak of innovation, they turned the
world into a massive telescope. It’s quite cool. Many of you have seen this
scientist, Dr. Katie Baughman after she created the
image. Unfortunately, soon after she shared this photo, she became the
target of a sexist backlash that attempted
to strip her of credit for this achievement. Woman does 6% of the work, but
gets 100% of the credit on YouTube, it’s pretty awful.
People trolled through her GitHub to support their sexist
narratives that she didn’t contribute anything substantive
to the project. Even to undermine her role, they
circulated the photo of this man, claiming he was the real
hero behind the image and he wrote 850,000 of
the 900,000 license of code. Until to his credit he quickly
shut them down, explaining that Katie led the development of
this algorithm that was pivotal to creating this image and
further that counting lines of code was a pretty useless way of
measureing scientific contributions. The thing is, many people, both
those that rallied around her and
those tearing her down are missing the point about how
science is done. There were over 300 researchers across so
many different countries who came together to work together
on what should be impossible. And scientific research of this magnitude is widely
collaborative. And doctor Baughman herself
never claimed to be responsible for the
discovery, the spotlight should be on the team and not on one
person. Focusing on one person like this helps no one,
including me. In the media and throughout history, though, we
are addicted to the myth of the solitary genius. It centers on the lone genius,
his or her, usually his, change the world with a radical idea. But this is not how futures are
imagined and created. In fact they’re in community, which is
obscure when had we narrative one person in
progress. Dr. Baughman’s story is complicated. Hero stories
can have positive outcomes. It matters that she’s a female scientist as the story clearly
shows, the obsession with the female scientist. And it’s
created in community and a lone genius is more likely to create
a dystopia than a joyful future in my opinion. She’s a genius.
But she’s not on her own and she would be the first one to agree.
She’s building her joyful future in community with others. We
are at present headed toward a different kind of black hole.
Where progress and innovation march forward, but this progress
is not distributed equally. Unlike the black hole discovery, public and free to all, much as
what we he should as innovations are more convenient for some people at
the expense of others and make
people rich along the way. This is not the kind of future that I
imagine. The future I imagine instead is one where people come
together to care for one another, imagine and build many possible utopias and
preferable futures. A joyful future is when we’re in
community with each other, feel a sense of connection, hope, optimism
and joy. When our needs are met and we
feel respected and free to let loose. All of that said, a joyful
future is much easier said than done. Today I’m going to tell a story
about some of the joyful futures I have been working to build in
the past years. This is a story how we build
joyful futures and how to participate in what could be be.
Like Dr. Baughman was, I’m a research and
student at the M ID Media Lab. It’s a multidisciplinary lab at
the cross roads of art, design, engineering and science. Our
mission is to invent the future. It’s a pretty confusing and
wonderful and strange place to work. On a given day, I might see
colleagues testing out a prosthetic limb they built, creating an opera with a
musical instrument or driving a
microurban car around the halls. And others are engineering
mosquitoes with CRISPR so they don’t carry malaria.
I’m not allowed in the room for the mosquito research in which
is for the best for even. There’s a lot weirder stuff
people do around the building. Usually at night when the mice
come out to keep the graduate students company. And there’s a strong culture of
students working on secret side projects adjacent to the main
research they’re working on. It’s a place full of inspiration
from colleagues from radically different fields and we students need to
find many outlets like side projects to find outlets for the
distractions. The story I’m going tell you about is how one of these crazy side
projects born of equal parts passion and
mischief making became one of the most meaningful projects
I’ve ever worked on. Around five years ago, one of my colleagues, Katherine and I, got
to talking about being some of the few women at the Media Lab.
Today less than 20% of the faculty is women, back when we
were talking about it was worse. Being a woman in technology is an exercise in
perseveres and frustration as you might gather from the story.
It’s not about Internet harassment. Katherine just had
her third baby while finishing up her master’s
at the lab. She was talking about a noisy and painful breast pump and sitting
on the floors to feed her kid. This was a record scratch moment
in the conversation. Wait a minute, here we are at a
highly regarded engineering school and you’re sitting around
using a piece of technology that is super-painful for you, hasn’t
been redesigned in decades and that I literally didn’t know
existed until 5 minutes ago. We started to ask ourselves, if
we’re here at this institution and our mission is to invent the
future, whose future are we inventing exactly? Does this
future not include breast-feeding women and babies
for some reason? And why are we so obsessed with Bitcoin and VR and neglecting
everyday technologies that matter to so many people. So,
in side project fashion, we decided we had to
do something about it. We pulled together a merry band
of engineers and designers around the lab, mostly new parents with empathy
for the situation and hosted a small workshop to talk about how
to build a better breast pump ourselves. At the end of the workshop,
summarizing what we were thinking. And had a line asking for ideas
for improving the pump. We expected 12 emails or something. We got 1200 emails from the
people around the country. It was amazing. And the emails
were long, they were detailed. Multiple page Google Docs with
like 10-point plan for improving the pump, written at 2:00 in the
morning when mom was pumping. We took a pause. Maybe this was
a conversation and an effort we should think about opening up to
more people. We heard lots of stories of many things including
social isolation. So, mother 8697, pumping is so
isolating. It sucks to have to leave
friends or family or go home in the middle of the
party to pump. We hear stories about pain. I love breast-feeding my 8 month
old. I cried at the hospital, early latch issues, and struggle
still to do it every day. If you think this doesn’t seem very
advanced, you’re right. The basic design of the breast pump hasn’t changed since the early
1900s. This was patented in 1914. So, the first make the breast
pump not suck hack-a-thon was born in 2014. Most of you
probably heard of what a hack-a-thon is, but if you
haven’t, it’s not about hacking into computers. But it’s a
problem- problem-solving-focused event where people come together
around a design challenge, make prototypes, eat pizza and don’t
go to sleep for a few days. Honestly our focus on the breast
pump ruffled some feathers in the building. People weren’t
sure if we were joking, if it was a worthy target for
innovation. Or what a breast pump even was. But like I said,
we like making mischief. And one of our goal was to change
our culture of innovation at the Media Lab
and we did it anyway. We gathered 150 moms, dads, mid
wives, designers, engineers and more into a room and for two
days we worked on this design challenge. It was a little
different than most hack-a-thons. We kept the pizza
but encouraged people to go home and sleep in their own beds at
the end of the night which I think is a sound practice. I had never seen so many women
or babies at a hack day on this before. It was a huge success. It was a transformational event
for those who attended. Some decided to go back to graduate
school. It pushed the market to develop
newer and better pumps as we partnered with many of the
breast pump companies. And they were pretty good sports
about being massively called out for having a product that people
didn’t like. Credit to them. And it got a ton of press
because the words MIT and breast in the headline is very
interesting. And that press might have been actually the
biggest asset of all in helping push the culture
of MIT and the broader technology community to consider this topic and women’s
health technologies more generally as worthy of
innovation. So, we were happy at first. But
in looking at some of the ideas coming out of the event which
were for like ultra-smart high-tech pumps
and the price points of new pumps coming to market, up to $1,000 in some
cases. We realized this was not the future we wanted to invent
either. People were left out of the imagining of the future we
created, we all missed the opportunity to raise the bar for
everyone. And working towards joyful futures means that you’re
going to screw up sometimes. And we did. So, we realized
that we cannot innovate for the 1%. We can’t innovate just for
people who can afford thousand dollar pumps. We think about
people who can afford that price point. It’s in the US, people
with largely good jobs who may have places to pump in the work
place. Probably have paid leave. It’s a
rarity. I’ll talk about that as it might
be quite unusual for this audience. And who can afford
the support of a lactation consultant when things go wrong.
We believe that good design and innovation doesn’t leave out 99%
of people and centers the people who face the most barriers. We
thought back to the emails we received in 2014. We heard
stories about all the other things that suck just beyond the
pump itself. For those with countries with paid family
leave. You may be wondering why so many
women use breast pumps in the first place in the United
States. One answer is in the policy
landscape. Here’s an email from 2014 that sad out to us.
Ultimately no pumping technology can overcome the fact that our
society pushes women back to work early and loads of stress, costly
childcare. Until we fix that, no pump is going to change the
landscape of what nursing mothers are up against. Our
reflections about the short comings of the first hack-a-thon
got back to the questions we were starting
with. Whose future are we inventing here and who is
missing from this imagining of the future? Our group of six women grappled
with the fact that there are people who are not listened to in technologyies
and innovation like MIT. And we could do better, bring together
a community that reflecteds a greater expertise that would
center the people most left out of existing conversations about
breast-feeding innovation to work towards our joyful future. It was time for make the breast
pump not suck version 2.0. We started to think about equity
and systems. It’s made breast-feeding into a luxury
good. Babies from well off families are more likely to get
it. Babies from less well off to get
it, and white babies more likely than babies of color. Black
women in the United States face a greater risk of mortality than
white women. And it’s not just public health issues, they’re economic and
social justice issues. What’s faced as
a personal choice for women, to breast-feed or not, it’s not a
real choice if you have to go back to work, you have no
support, you can’t afford the time it takes to do so or
you aren’t given appropriate health information. But we
believe in this quote by our friends at the equity design
collaborative, racism and equity are products of design. They
can be redesigned. This is the title of a terrific article
which I highly recommend. And the main point that they make is that racism and inequity can be
both intentionally and unintentionally designed into
systems and artifacts. Unintentionally when designers
like ourselves fail to reflect on our
biases and make choices how to address them. To work towards our own biases
for the second version of the hack-a-thon, we built
relationships across lines of racial, socioeconomic,
geographic and more. Led by my teammates Jennifer
Roberts and Michaelson, we worked with those working in the communities on breast-feeding
supportive technologies and services. Centers of innovation like M IT
often overlook innovation work communities are doing to make
the world better. And we wanted to use our
institutional resources to uplift and support the critical
work that people are already doing. We worked with these teams six
months prior to the event to help them develop some of their
existing projects into something they could bring to the hack-a-thon that would benefit
from the reusers and people available over the weekend. Each team had talented
innovators and passionate asks for low income
families in their communities in Boston,
Massachusetts, Mississippi, and Albuquerque, New Mexico. And nay they worked on a free
standing birthing center in Boston, the
first of its kind to a self-advocacy
toolkit and in Detroit, and then ceremonial
clothing in New Mexico. To a toolkit for community health
workers in Mississippi. Hack-a-thon were new
for the members of the innovation program and served as key
members to redesign the character model and make it
better. The second make the breast pump
not suck festival drew around 250 people across all the
events. And for 75% who came, it was
their first hack-a-thon. We wanted it to be a beautiful
experience. And I would love to show a clip of those in the
community describing what they worked on at the edge.
>>I think our community innovation teams are the
highlight.>>We have four community
innovation teams coming from Mississippi, Albuquerque, New
Mexico, Detroit and Boston.>>The care is a maternal infant
health program. The focus of our design is to ensure that
families have access to timely lactation care.>>We are a nonprofit working
around breast-feeding equity for women of color. We are
designing a movement, a social media platform.
>>We are working on the laugh pack toolkit which is an infant
feeding toolkit for disasters. Everybody who wanted to had a
60-second opportunity to tell the audience their wonderful
idea and invite other people to work with them.>>My name’s Camille and I’m her
best friend and not a mom.>>My name is Rachel Lorenzo, we
have our prototypes.>>And I count religiously for
three months — carried a full-time. So, when that nurse threw away
the milk. Ain’t nobody got time for that.
>>I’m excited about what will result and also the partnerships that
are formed here. And also the friendships and the networking
and hopefully that will continue after this Ent Ent is
over.>>I don’t know what to can
expect. The joys of the hack-a-thon, people come
together with different ideas and work hard to provide
whatever materials they might need. A Sewing station, electronics
workbench and a whole bunch of materials. I don’t know what
people will do with them and I’m excited to see what they come up
with. ♫ ♫ ALEXIS: So, hopefully that gives
you a little taste of what it was like. Many amazing
innovations came out of this event. Way different than
thousand dollar breast pumps because who was in the room was
way different than the first time. And the room was full of
dedicated people solving problems that they themselves
faced or that communities they worked with faced. One of my
favorite examples briefly shown in the video, this is the
infant ready feeding kit for natural
disasters. The New Orleans breast-feeding center shared
this with us. They scaled up their prototype
from the hack-a-thon into something distributed across New
Orleans, a city that’s faced devastating hurricanes and
floods. This is a really important problem that they
themselves knew about and took it to their own hands to solve. So, given that make the breast
pump not suck 2.0 was our chance to make some serious changes
changes, we thought about the hack-a-thon structure itself.
Its strengths and short comings. We realized we had an
opportunity to hack the hack-a-thon. The strengths of
the hack-a-thon format are numerous. Gathering a community
of passionate people together in person,
pulling together didn’t skills and perspectives. And giving
people an opportunity to be playful and creative outside of
their day jobs. These are amazing things. But
hack-a-thons are not known for being the most inclusive
entitles. And nor for having a great deal
of real world outcome outside of the one or two days that they
usually are structured. So, we decided to rethink everything.
Everything from who typically attends to what happens in the
space to the way people share ideas at the end. So, we made
lots — lots and lots of changes. The first change was
about who was in the room. So, in the United States
hack-a-thons typically attract young white
male technologists. Well, our first hack-a-thon had
many babies and women in attendance, which is pretty
rare, it was still largely a very white space. We changed the way we recruited
and advertised to attract new audiences. A racially and socially diverse,
and beyond just engineering. And we raised significant funds
to support the travel of people who otherwise couldn’t afford to
come to an entitle like event like this. It’s one thing to invite people,
another to make it possible for them to attend. This was
important. Hack-a-thons are competitive spaces. They’re
aggressive and competitive. We didn’t want our space to feel
like that. At our first event, we had typical first, second and
third place prizes. At the second event, 12
event-basbased prizes to take it to the next level. It upended
the competitive atmosphere that you see. Instead of polished
pitch presentations at end like the
CEO of a company which sometimes happens at the hack-a-thon. We
had a science fair format, enabling two-way conversations and
relationship building versus presenting a polished narrative.
Hack-a-thons typically focus on technology alone and not
systemic and structural issues. We expanded our definition of innovation and framed the set of
issues people hacked much more broadly. And lastly, hackathons are not
known for producing innovations that last beyond two days. We made new space for projects
and newcomers and amplifying ongoing
work out in the world remitting to
breast breast-feeding innovation. And prior to the hackathon, we interviewed people from racial
and socioeconomic backgrounds to produce a book that we published
just before the event. And several of the participants in
the interview said they came to the hackathon and they worked on
teams or acted as roving consultants. So, their
experience and willingness to share their stories made the
design so much better and so much more grounded in reality.
And in the book we drew insights from all of the stories of the
participants. We had a section in each
person’s story about what could have been
better to give our hackers and makers some concrete suggestions
based on people’s real experiences. As we made — as
we built relationships to make this new community possible, we
also did a lot of thinking about our own identities. So, we were
a team of six women. And the four white members of
our team started to meet monthly for facilitated conversations
around whiteness, privilege and confronting our
own roles around racial inequities. This is pretty
uncomfortable work to do. It was through our relationships we
built through trying to create something together that allowed
us to go deep and have authentic conversations in ways that we mess up when we don’t
consider power imbalances that are not apparent to us. We held
ourselves and others accountable. This has an impact
on us, the ability to communicate across lines of
difference, on the project itself. And I recommend this to
anyone working across the lines of racial or other different.
There are no easy answers, no checklist. It’s ongoing work
and it’s just a huge part of my practice going forward and I’ll I’ll take it for any
project going forward. Even though the team is M IT
nerds, focusing on technology is not
going to solve all the problems. I mentioned family leave in the
United States, almost nonexistent. 25% of people go back to work
ten days after giving birth. That’s not enough time to heal,
it’s dismal. To complement the hackathon, we had a convening
called make family leave not suck policy summit which brought
together policy leaders to hack paid family and medical leave leave,
inaible to about 85% of people who give birth in the United
States. One thing that happened shortly after the event, we took the carnival
on the road, traveling to Washington, D.C. to share ideas
from the policy summit and hackathon with the Senators.
Many of the hackathon participants joined us. We met
with 16 offices over a couple days,
covering all the different places we were coming from.
Change at this level is slow and paid family and medical leave is
not something that’s going to happen in our current
administration in the United States. However, learning how
to do this kind of advocacy work is a critical
skill for everyone on the team going forward and something
we’re committed to fighting for alongside the technology design
work that we’re doing. Bringing people together to design for equity requires cultivating a
spirit of joy and play which helps people and institutions
build relationships across lines of difference. But I would like
to complicate the matter of joy and play for a moment. It’s not
enough to convene a diverse group of people expecting them
to magically arrive at a radically better future. In fact, doing so is likely to
surface tensions. So, that means you absolutely need to
prioritize the comfort of people who have made to feel unwelcome
in innovation spaces. This is key to
both community building and creative problem solving . I think joy can play as
activities of resistance in toxic times, they
can restore us so we can tackle it
together. I don’t think joy is the escape, it is the means of
building something new. I return to this quote from
joyful militancy, for joy to flourish, it needs sharp edges.
What does it mean for joy to have sharp edges? For us it
means that our joy must not gloss over pain, anger, grief in
service of so-called harmony. It means taking a stand and
having boundaries. Refusing to compromise on certain values.
So, in order to create an environment that was truly warm
and welcoming. Our hackathon had many sharp edges. We chose
to center women of color, which means explicitly not
centering white women in our recruitment efforts and who we
selected to attend the event. We asked everyone to uphold the community agreements so we could
hold each other accountable for how we treated each other in
the space. And we began the hackathon with a workshop on equity in order to
create a shared vocabulary and understanding of how we might
think about and address our own biases. These were some of the
sharp edges that allowed us to make genuine space for joy and
play. And play we did. We had a lot of fun. We had a beautiful art
exhibition curated by Laura, showcasing the issue not just
from a product or policy angle, but one of human experience.
This kind of third space offered people a chance to connect and
convert with one another outside of having to be productive in
one of the other spaces. We baked 500boob cupcakes in a
variety of skin tones to hand out to participants. We had a
product expo with 30 companies, large and small,
showcasing all their innovations. All surrounding a baby village
where people were hanging out with little ones. We had a library, we brought in
plants and couches to transform what was otherwise a sterile
tech space because we believe how
people feel in a space and in their bodies matters. All of
these opportunities for play and reflection allowed people to
open up credit creatively and be
generative of ideas and solutions. This silliness and
joy combined with our sharp edges made the weekend a great
success. Breast pumps are not the only piece of technology
that’s been neglected for women and people who breast-feed. The
world is quite literally designed for men. In her book Invisible women:
Data bias in a world designed for men, arguing that from root
to tip, systemically discriminating against women, leaving them
misunderstood, mistreated and misdiagnosed. Because the white
male body is a default body and female bodies of color are
atypical, products and drugs are often not tested on populations
beyond men. So, one such shocking case where a heart medication was released
that was meant to prevent heart attacks. But at a certain point in a
woman’s menstrual cycle, it was more likely to
trigger a heart attack. They didn’t test this at different
parts of the menstrual cycle because it was too complicated
and expensive. Are we willing to let women die
than do tests? Are women too complicated for
medicine? This leads to what’s next for us. We nurture the breast pump
community with follow-up events. We have one at the end of August
with black breast-feeding week, we are listening to feedback on the
second event to see where to do better. And scheming up new
ways to open up conversations about technology, policy and
equity at MIT. We’re planning another hackathon
because we can’t help ourselves. This is about menstrual equity,
it’s called “There will be blood.”
[ Applause ] So, on average people who
menstruate get their periods for 2500 days of
their lives. That’s almost seven years of
bleeding. Menstrual health is linked to
broader sexual and reproductive health.
While periods are half of the population, they are overlooked
as research. Menstruation research lags
behind compared with other life stages and experiences. Because
there’s little empirical data available, most educational
materials are incorrect or nonexistent. Many communities have low
menstrual understanding and increase the
stigma. Periods are political. Inequitable policies, the tampon
tax, place an unfair burden on
menstruating individuals. The tampon tax in the United States
and other places in the world refers to the fact that menstrual supplies
are subject to sales tax. They are in the majority of states in
the United States. Despite the fact that many things are
exempt. Including in some states collectible coins,
sporting events tickets and golf club memberships. So,
this tax places an undue burden on people who menstruate. And
for people in poverty, this adds up. The needs of enstraitedding
people are neglected in a wide range of
context in the United States, schools, work
places, homeless shelters and prisons. And beyond neglect, there’s
major injustices. In prisons, menstrual splice are sometimes withheld from
incarcerated people as messed up power dynamics. She couldn’t get menstrual
splice and made her own out of toilet
paper. In 2016 after release she
suffered toxic shock and had an emergency
hysterectomy. These are real issues. In the excellent book, periods
gone public: Making a stand for
menstrual equity, those with lack of four
and agency, they’re more at risk by their periods. So, at there will be blood in September, conveneing
technologists, policy makers, designers and activists around
this cause. We are hacking technology but also social
norms, educational materials. Bringing attention to inequities
faced by breast-feeding people and more.
I can’t wait. I’m excited about it. When I’m in the middle of
being depressed by the seemingly certain doom of a less than joyful future, I
will go back to the felts of this event and think about that
Sunday morning when I was convinceed that joyful
futures are possible. When people, not a loner in a
lab, get together on a Sunday because they believe in a better
world. Futures where you make 500 boob
cakes to make them smile. Where children learn and play
alongside adults. And have difficult conversations about
race and designing for equity because it matters. I want to
leave you with some questions to ask yourselves. Ones that I
continue to ask myself in my own work, whatever it might be. Who
gets to imagine and invent the future? Who is missing from
this imagining of the future? And whose voiced need to be
heard and centered to truly build our joyful futures? Thank
you. If you would like to get in touch with me, here are
all the ways. Thank you. [ Applause ] I think we have some time for
some questions. If somebody has a loud voice.
AUDIENCE: Sorry. The microphone wasn’t working. Are there any
questions? We have a microphone guy over there. And here are a few hands over
there. AUDIENCE: That was a really
wonderful talk. Thank you for that. I’m really interested to
hear maybe a little bit more about some of the structure or
like some of the inequity maybe you’re hoping to start
unpacking in the future hackathon.
ALEXIS: Some of the what?>>Maybe the structural inequity
you’re going to unpack, I’m interested in the there will be
blood hackathon. ALEXIS: We are early in the
process. But we are partnering with an advocacy group founded
by a couple lawyers, period equity. Our timing dovetails nicely with
theirs. Next year they’re going to launch a campaign to sue ten
states in the United States to repeal the tampon tax.
Excuse me. Our hackers will be creating
advocacy materials and things to go along with the campaign. What’s interesting about the
campaign, they’re working beyond issues beyond just the tampon tax, but
cementing menstrual equity as a concept will help them build
more things related to like accessing in schools and
homeless shelters. Issues in prisons I talked about. Kind of putting forward this
baseline legal language will help build on lots more things.
So, thanks.>>The next question over here,
the guy. AUDIENCE: Thank you so much for
this inspiring talk.>>Can you please stand up?
AUDIENCE: Yeah. Before on stage there was someone talking about
solutionism and the conflict between the underlying system
and the people who try to fix problems with technology or fix it with
single solutions and I’m wondering if you have thought
about this. What’s the answer between single solutions or tackling the whole
system thing? ALEXIS: Yeah. That’s a good
question. I think about it a lot. I think in our first — at
our first hackathon we always had this
idea that, like, you know, we knew that paid family leave was
a problem and that the breast pump — improving the breast
pump might be just like a band aid. But we still started there
and offered to people the chance — I think we mentioned at the
beginning of the first hackathon. Oh, if you would
like to hack policy, like, that’s totally something you can
do. Go for it. But we didn’t offer people any scaffolding
how to think about how to hack systems and think more
systemically. That’s one reason why we added
the policy summit and why we try to bring in conversations about the
structural issues. Even if people make things that don’t
directly address that. Having a community of people leave the
room with a higher understanding of these systemic and structural
problems is an asset. The community we’re building is an
output just as much as the things in the room.
>>Perhaps one more question? Are there any hands? Thank you very much, Alexis.
ALEXIS: Thank you.>>Thank you u. [ Applause ] [Waiting for translation] lation] .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. . … lips [ Applause ]
[Waiting for translation] . .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. . poo Ed.
. .
. .>>Hello? [speaking in German] — Christian Mio Loclair. ♫ ♫ [ding! ]>>Are you talking to me? You talking to me? You talking to me? Then who the hell else are you
talking — are you talking to me? Well, I’m the only one here.
Who the fuck do you think you’re talking to? Oh, yeah? Yeah?
Okay. CHRISTIAN: Now, what a moment.
We see the taxi driver empowered by the feel of a gun in his hand
and inspired by the idea to show a prostitute a better life.
Practicing in front of the mirror how it would be if he
would only have power. But this is just what happens
and it’s not really important because it would resonate with
no one. But what resonated with everyone is the underlying
poetic why. So, he is not a killer, he’s a killer in the
making. This is why he’s instable. And he uses the mirror,
actually, not to meet anyone. He meets himself. And then he
asks, who am I? And more importantly, who am I with this
gun in my hand? The same question that Narcis was not able to answer.
The story is lame, he was cursed by a god to only love himself
which he did in the first place and drowned
while trying to kiss himself in a pond of water. While that
story makes no sense, but there is, again, a poetic why. When he was born, they said this
boy might have a long life if he fails to
recognize himself. That is somehow great because it weaves
together our ability to survive with our ability to recognize
ourselves forever. And that is the reason why we share that story from generation to
generation, whispering, don’t look too close in the mirror or
you drown in yourself. That is the poetic why. Now, today I believe we
witnessed the most spectacular mirror of all times and that is
technology. And especially artificial intelligence. I do
not believe in the duality of man and machines. I believe that it’s artifacts
that directly we are given birth by us. They came from us and we can use
these artifacts today to look at them and investigate, well, who
are we? This is what’s interesting to us
at the interdisciplinary studio for new media where we
investigate how we can weave poetic content into new
technologies. Now, this is important in my opinion simply
because over the course of the last ten years we have witnessed
a storm of information flooding us every
day. But we have seen more but
remembered less. We have experienced more, but felt less.
This is because we simply are not able to process the information and
that flattened — and this is a problem no matter what you’re doing because
unsustainable communication leads to unsustainable business.
If no one listens to what you want to do, then you have a
problem if you’re an artist. But also, you will not get your
startup anywhere because no one will remember what was your
agenda. Now, for my origin, as I said
earlier, I’m originally a robot designer
and did this for a while. And I had no clue why I had to do this
for 20 years. I traveled all over the world and performed the
robot all over the world. But what’s stuck with me forever is
that whenever you do the robot, you actually are not studying
machines. Because if you move like a robot
— the only thing the whole time you think about is your heart beat and
that probably shows that you’re a human. You think about your
hands shaking and your eyes drying out so you have to blink.
While becoming a robot, the only thing that you have to worry
about is being a human. Now, on the other hand, I’m a computer scientist and my mother taught
me to code in 1992 when I was 12. She thought it could help
me to structure my thoughts. She was also a software
architect. And maybe she wanted to get me
out of hip hop. But what I directly did was the same
travesty. I thought, wouldn’t it be fun if there’s this dead
box and we could build something that comes out of that looks
alive like this Christmas tree right here. But, of course,
over the course of time this has evolved.
Everything you see today is only designed and created through
mathematics. Now, I did both as a child because I enjoyed it. But doing a piece, a theater
piece in Hanoi, Vietnam ten years ago I
founded my philosophy. And this goes like this. Imagine we as
humans, I believe we become more and more mechanical
and digital. And on the other hand we have machines and they
become and and more organic. Human-like. And if we push them
on top of each other like an artistic overlay, then we do not see those redundancy that
we just created. The things that we simulated. What we see
more clear are the things that stick out. The things that we
were able to not replace, to not copy. Those are the things that
make the hand unique from the other hand. And other things
that make the machine the machine and the humans the
humans. And this is interesting because I think it has always been just
about this. We have built digital machines
that become more and more human only to find out what at the end is left of
us. ♫ ♫ so, what is it that is left of
us? The current belief is that there are three things that make
the human the human. First of all, he knows about
himself. And because he knows about himself, he’s able to
create a decision from within. And sometimes this decision is surprising to us and that is
what we call a creative decision. Now, let’s slice those in pieces
and see if they truly exist. Now, the self is the first thing
we try to discover and try to investigate from the prospect of
artificial intelligence. And we stumbled into this field by an
algorithm called image to text. Now, this algorithm is able to
translate an image into a sentence. When I saw this algorithm, I
thought this was completely insane. No one was really
interested. But it’s amazing. Think of the ability that I see
something. Then I encode it into a
language. Make sound waves, send them, decode them and all
of a sudden you see it too. This could be called tailor
pate, we know nothing in the universe able to perform the
same task. Now machines are able to do so. This is
sensational. But as artists, our question is now that the
machine sees and understands what’s in front of us, what
could it look at, right? So, we showed this algorithm
art. Perhaps photography and asked what do you see? It sees a man riding a bike in
this photography and it makes sense. But as soon as it gets a
little bit more abstract, it kind of fails. It sees a
motorcycle. But what do we know what’s in this image? And this
one here is my favorite. It’s actually a performing
artist. It’s a stick a ball and a human,
it must be a baseball player. This was enough for a small
joke, but I don’t think enough for art because it doesn’t hit
anyone. So, we investigated a little deeper trying the
Rarschach test. We showed the machine this image. And if you think like this
machine that this is a closeup of a bird on a tree, then the possibility that
you’re psychology stick is 92 First.
It is a joke and doesn’t hurt anyone because it doesn’t answer
the corresponding question that is necessary. What do you always see the whole
time, but you have asbolutely no clue what it is a? And
definitely have no clue how to put it in just one sentence.
And I believe it is yourself. You continuously look at
yourself and you have no clue what it is. You have big
interest in yourself. But you can’t explain it. Now, we
thought, let’s build exactly this. That might have the
poetic power that we’re thinking of. So, we created a naked body of a
computer. A motherboard. And placed a little sensor on
top of it so that it has an eye to the world. And then it is
able to investigate what it is looking at. But unfortunately,
it’s standing right in front of a mirror. So, the only thing
that this machine could ever analyze is actually
its own existence. So, by the use of a simple mirror, we
basically asked the machine who do you think you are? On the
back of the screen we say — we see a little
monitor. And on this monitor we can see where this machine is
currently looking at. And what it thinks it is. Or who he
thinks he is. Because his name is Narcis.
Because why is consciousness so important? I believe it has
been the mother of all discoveries. About 5200 52000 years ago, we
came up with the idea, I exist. After this epiphany, now that I
know I exist, I know that I don’t like this. I have to
design my environment. But I can’t do it by myself so I have
to communicate with others. And I can’t communicate everything that works in my neural network
so I have to sing and dance and paint
on walls. And I do know that I exist, but I don’t know why.
So, we created another existence that we can ask, why do I exist?
Everything came from this one moment. And that’s why we thought, let’s rebuild this moment with a
machine. ♫ ♫ but what about choice? So,
basically that you have the choice is the reason that hopefully
someone pays you every month. Because if we already know what
you’re doing next, no one would pay you. But it’s also
principle of democracy. Because the only reason why we ask you
if your vote is because we don’t know before what you’re gonna
vote. If we would know this, no one would ask you. Now, you see
it’s fundamentally important. But it’s not clear if we
actually have a choice. For instance, pick a favorite
material from those. And imagine I would be able to data
mine the type of material that each of you picked. That would
mean that you don’t have a closed circle in which you
generate a will. It would mean that you are actually open and
readable. Just like a machine. Predictable. But I’m not able
to perform this task. But this can have two reasons. The
first reason is, well, you are a closed circle in which you
generate within yourself your own opinion.
Or, I’m stupid. I’m just a very bad observer. Maybe you lay
open the whole time what you’re gonna do next. I just can’t
read the code. Now, let’s take a look at our ability to predict
what’s gonna happen next. So, this right here is the sequence.
And you have to predict what happens next. And if you — this goes on
forever. Well, I can tell you, it’s the
opposite side. Now, uh-huh. This is what
happens next. Now, take a look at this sequence right here.
It’s a little bit more complicated. But I think you might already
get it. So, this is a circle. And it’s your task to see what’s
gonna happen next after the end. Well, and if you think that this
right here is the right one, then you absolutely are right.
It goes around 12 and then finishes right next to it. But let’s take a look at this
one. Basically the GIF it broken so it doesn’t really
work. We take away the circle. And it’s your task to see what
would happen next. And I think you’re not able to do those.
But the thing here that I want to communicate is, I wrote all
three codes. And they are equally long. You are very good
with left-right. And you’re very good with a circle. But as soon as we distort the
circle a little, you’re completely unable to predict
what happens next. So, what if you, as humans, are
just very distorted circles, but still machines? Let’s take a
look at this organism right here. Do you think there is some
conscious jelly in the microorganisms that makes a
conscious decision every day, a choice, that it won’t act
or go left, right, or just don’t walk today. And if they’re
complete robots, it’s just hidden to us. And what about
this? A little bit more complicated microorganism. Do
you think it’s predictable what he’s gonna do next? And that’s our new art piece is
about. We do believe it is completely predictable,
especially if he tries to optimize himself. He will create redundancy and
reduce the complexity of this behavior. And because of
this, he will be swallowed by the machine. So, first, he
tries to keep up with the machine and that’s how we
swallow him. And this is what our new project
is about. It’s called 5 Seconds. It’s a
mirror you see yourself. You can wave your hand. But it’s a digital mirror
enhanced with a neural network. It doesn’t show you waving your
hand. It shows you if you will wave your hand in 5 seconds. It
basically continuously predicts what you’re going to do. Now,
how is this possible? Everyone would say, now, well, I could do
the opposite. And that is the part where the art kicks in.
Because scientifically, yes. It’s a little difficult. But
the art comes with a judgment that we want to say. Because,
yes. You could always do something else. But I do
believe that you will get used to a future in which you
will continuously be predicted and you need two weeks. And you
will be okay with it. So, no, we can’t predict
anything that you could do. But you will not do anything.
You will do every day the same. Now, but what about creativity?
And creativity is very important to us because we actually are a
creative studio. We create new media content with
evolving technologies in order to communicate stories. But
therefore we, of course, are interested how those involving
technologies will affect our business. So, we thought, let’s
discuss the gun technology. It is a technology that is able to
generate fake Obama speeches. It is able to create fashion
that we never designed and come up with new solutions. And it’s able to think of horrid
horror dogs. But to understand the horror
dogs, you have to understand the underlying technology we use to
create them. It works like this with two
different artificial intelligences. The first one is
teacher and the second is student. Now, the teacher gets
to see data. Let’s say faces. And he then asks the student,
could you also create a face? This student then creates any
image completely randomly and the teacher says,
well, you should try this again because it looks completely
different. He tries a million times. And then nearly comes in
the right direction. And since the teacher then says, well,
that looks better, the student knows and learns in which
direction to go. And after a while, the teacher
is not able anymore to distinguish between its original
data and the one that the student created. He basically falls for fake
images. And that’s the moment when it
gets really interesting because you have to remember this part
right here. We now take away the teacher and
the data and ask the student, could you please think about
humans? And this is what he came up with. Now, he continuously thinks
about different humans. But none of them were ever alive.
None of those pictures were ever taken. had doesn’t go
through data because you have to remember the student
never had the data. He never saw the data. It was the
teacher that is gone now. What you see here is a system
that does know everyone. For instance, me, and it knows you
and all your ancestors and your children that you never gave
birth to. Why? Because it didn’t store data of
images. It learned what makes a human. Now, this is, of course,
very interesting. But why is no one doing 3D? It’s because they
can’t. Because one dimension more is one dimension more
complex. Actually, it’s 1,000 times more complex because there’s that
axis has 1,000 times the data of just a two dimensional
image. Now, until we found, actually, a little trick. We
found this two months ago. How we could compress three
dimensional data into this network and ask him for new
shapes. But we did so actually investigating the human body. We showed this artificial
intelligence different shapes of the human body and wanted to ask
him, could you also show us a new human body. And he then
said, well, it looks like this. And that didn’t really feel good
or encouraging. But please look closely how it looks
when the student thinks about 3D bodies. It looks suspicious,
doesn’t it? That was the first day. And I would like to show
you the second day. So, what do you think, remember
those faces morphing, how would it look? How would we call it if one body
shape is continuously transforming into another shape.
Well, we call it dance. And it’s performed by a student that
never saw a body. ♫ ♫ now that this AI thing
understands shapes and space, what could we show it to it? So, we thought maybe let’s take
worthy objects that are precious to us. For instance, our
cultural heritage. Could this machine store all the things
that we built, the artifacts? Now, remember, it doesn’t store
the artifacts, it stores the information of the nature, what humans have
createed. This is an early iteration. Probably two weeks
old. And it slowly starts to understand more and more of actually
everything we did between Rome, Greek, Syria
and stores this. And all of a sudden we have this artificial
intelligence crawling off the motherboard and into our world.
And it’s now there in space. But I do know as an artist that
contemporary dance and cultural heritage is not everyone’s
language. It’s a weird language that not everyone is willing to
speak. So, we thought, could be there
be an object that is worthy or has been worthy across the last
hundred years and we could redesign it 30 times a
second? We call can the machine . it the machine. ♫ ♫ now, we have over the course of
the last year we have completely transformed all our pipelines
into artificial intelligence and everything went quite flawless. We built sneakers, we built new
prototypes and endless material of 2D images. I’m not sure if
that’s what makes the machine now creative. But I’m also not sure if
creative is a future business at all. And then all of a sudden
there is nothing left. Who cares that we are able to think about ourselves if the choices
that we make are predictable and doesn’t seem to be creative at
all? Now, this is the lowest part of depression because I
believe we found by accident the solution for this. Now, we thought, couldn’t it be
that we teach a machine not
necessarily how adults create stuff, but how children create
imagery? That would mean that we could have an AI access to completely
forgotten creativity. We would see totally surreal
images built by, well, the understanding of only how kids
view the world. So, we scraped as
many images from the Internet of children paintings and then
taught this to the machine. And once we had trained our machine, we showed it this architecture
here and asked it, could you please redraw it in a children’s
style. And this is how it looks. Knew, an artificial intelligence
is only implemented for one reason, to
make a decision. What color did this come up with? Every
decision. It just refuses to make a decision. And how about
this landscape? It completely breaks. And this cup of tea or
coffee. It just doesn’t work. And this is fundamentally
interesting because it says something about us. We might be not able to
outperform humans. We outperform certain tendencies
of specific humans. Well, how is it possible that we
are able to draw Picasso’s style 60
times a second, right? But not this artist. Well, let’s take a
look at his data. What is it actually — how do
they space their paint paintings
while in every way? What is their favorite color? Every
color. And what is their motif?
Everything. Well, an AI detects patterns if
there are patterns. And that’s why it fails with this. And
what does it mean about the future mirror? Well, Charlie Chaplin already
knew this almost a hundred years ago. We have to enrich our
behavior and then the machine will break. We ask the machine,
what is this person gonna do next and it will say, well,
everything. Now, and what about this machine
that now is able to speak about itself? I would say, who cares? We
didn’t build a machine for two years with a webcam and a
motherboard to place it into a crematorium, the
biggest in the world and then call the
violin player to find out how the machine feels. That would
be too pathetic. Yes, it is a machine looking in the mirror,
but this mirror is made for you. But whenever we look into the
mirror, it is quite often that we just don’t like what we see.
Well, I do believe that today we see that this artificial
intelligence and this robot that we are so, frightened by is
actually us. But we could use this technology
to understand this and improve. And that is why I’m not scared
of machines becoming more and more organic. I’m deeply
worried about humans becoming more and more mechanic. Thank
you very much. [ Applause ]
>>Thank you very much. [Speaking in German] .
. .
.>>Greetings. I’m going to talk to you today
about a global network of volunteers
which I am a part of. I think it’s particularly
interesting in this meeting because we’re actually doing
this. I’ve heard a lot of presentations. Some very
interesting presentations. About the current state of
affairs, about the future. I want to talk to you about the
present and the near future and something which is working quite
well. Although we don’t know how it’s working well. And so, I think there’s an
opportunity here. To make a long story short, this
is what we do. This is Shay. She was 6 years old at the time. She’s probably about 13 now. And if you look carefully,
you’ll see that her right hand is missing fingers. She was
born that way. And she’s getting a plastic
3D-printed mechanical hand from a
volunteer. And she bends her wrist and it
makes a fist. And she smiles. And that’s what we do. We make
children smile. We make parents weep. And we make nerds
rejoice. That’s our whole story. I’m finished,
thank you. [ Applause ]
Okay. There’s more. This started for me about six
years ago when I saw a video on
YouTube about a South African carpenter who had lost the fingers in a shop accident. And he found a puppet maker from
Washington state. And together they had figured out how to make
mechanical fingers. And eventually figured out how
to make mechanical thingers we a 3D printer. And they mentioned
that they were giving these devices away for free. So,
here’s the puppet maker. And that’s a device that he had
made. And these two collaborated and they figured
out thousand make a mechanical hand. This video explained that they’d
realized that the hand they were making could have useful not just for
carpenters and adults, but also for many children who were born
missing fingers or hands. And they also mentioned that they
were putting the design online. For free. And I had an idea. I
was supposed to be preparing a class. But instead I was
watching YouTube videos and I put a comment on the YouTube
video. Because I noticed that people were saying, this is
really good. I would do this. So, I gave them a way to do it. I made a Google Map and I told
people that if they had a 3D printer
and they wanted to help, they could put a
red pin on the map. And if they knew someone who needed a hand, they could put a blue in
on the map. And it was an experiment. To my surprise,
that might there were 7 pins on the map. And within six weeks there were
70 pins on the map. And people started calling me. Saying,
okay, now what do we do? And I didn’t know. So, I created a Google +
community. And as you know, a month ago
Google + went away. But we had 10,000 people who had registered
for that community. And we produced many designs and
we are now all over the world. So, I want to tell you a little
bit more about Enable and what we do. And then talk a little bit how
it works. And then, hopefully, get to talk with you afterwards
about how the kind of thing we’re doing might
be useful for things that go beyond
3D printing and prosthetics. Because I think we may be on to
something really valuable. So, first, let me say that these devices are laughed at by
professional prosthetickists. A professional looks at this and
says it’s plastic, it’s brightly colored. It’s made of plastic,
it’s gonna break. It looks like a toy. A kid looks at one of
these and he says, it’s plastic, it’s
brightly colored! It looks like a toy! And they like them and wear
them. Where in fact children don’t like wearing medical-grade
prosthetics because they’re heavy and they can’t take them
into the wool. And they can’t get them dirty. And besides, they will out/x04
they will outgrow them and they will cost 5-$10,000. We are giving ours away for
free. We can do that because they are
made with consumer-grade 3D printers
and made by volunteers. And frankly, they’re not up to
medical standards. They’re up to
child standards. And they’re up to a standard that I want to
recommend to you. My standard is, is it substantially better
than nothing? This is not German engineering.
This is the idea that in a world in which many people have
nothing, if you have figured out how to make something that is
substantially better than nothing, it is immoral to keep
it back even if it’s not up to your high professional
standards. And if you make it available to volunteers, and you give it away
for free, you can do things that
businesses and engineering schools and medical prosthetic companies can’t do or
don’t do. So, part of our success is that they’re just good enough and
they are kid-friendly. But the other part of our success comes
from this kid who I like to call our director of marketing. I
call him that because he got one of our early hands. It happened
to be colored orange and yellow. And while he may have been born
without fingers, you can see that he was born with one of
the great smile, right? And the picture was taken of him saying, look, I got an Iron Man
hand! And FOX News picked that up. And within weeks the Enable
community was making Iron Man hands and
Wolverine hands and Captain America hands and the whole
thing got a little bit out of hand. But for a kid this turns
out to be more than just a convenient metaphor. Remember,
every super hero is born with some kind of a flaw. And then
through some magic of technology or something they get the ability to do things that they
couldn’t do before. And so, for these kids, it turns
out to be not a metaphor. It turns out to be a good way of
understanding what’s going on. Now, that was basically four and
five years ago. And you can see that the original device from the video has
evolveed to devices that are much more natural-looking. And has evolved in two
directions to devices that are much more
complex and devices that are much simpler. This device has
one moving part. I’m trying to push my community, which I
cannot control, but I can encourage, to go for the
simplest thing that will be useful. Because that will ultimately
reach the most people possible. And you can see that there’s
been real progress even though our global community of volunteers has no
director, no business model. And it’s all free and open
source. I want to say a little bit more
about the devices and the meaning of the devices. This became clear when this kid
was interviewed. Another secret of our success is that this
turns out to be a story that news media love to tell. You’ve
got a smiling kid. You’ve got these crazy people on the
Internet. You’ve got this mechanical hand. And the hand is spectacularly
cheap. It turns out to this kid, none of that mattered. And I know that because this kid
was interviewed six weeks before he got the hand. So, it’s
really not about the hand. He said, you know, I was born
with a funny hand. I was born without fingers. And I have bad
dreams. And in these dreams there are
monsters who are coming after me. And now I just turn to them and
I say, you don’t scare me because I have two hands. This
was before he got the device. We’ve come to understand that
with kids as well as with devices,
these psychosocial aspects of this device are even more
important than what you can do with it. Or to put it another way, they
— what you can do with it is you can
feel good about interacting with other people on your own terms with your super
hero hand. So, I was in South America six
months ago. And I met these two young men
who had both gotten arms from an Enable
chapter in Honduras. Both of them were instantly
unemployed when they lost their arms. Couldn’t get work. Turns
out both of them got arms and both of them actually started
their own business. This fellow sells house plants. This fellow sells sandals in his
own store. They were both electricians before they had
their accidents. And I asked both of them,
this was on the same day. So, it finally got through, I said,
how often do you wear this device? Because prosthesis are not worn
that much of the time even if they’re 10 and $20,000 medical-grade
prosthesis. These guys said, well, we wear them every day.
And I asked him in particular, well, what’s the important thing
you do with your hand. And he looked at me like I was an idiot
and he said this is the most important thing I can do with my
hand. I can hold my daughter’s hand and I can go for a walk.
This guy, while he’s demonstrating the sandals, I
asked him the same question, he said, well, now when I’m out with my buddies, I can do a
fist bump. I was in Thailand just a month
ago. I’ll be going back in three
days. And in Thailand, as you know,
everyone does this about 30 times a day.
So, only when I got there that I realizeed that our hands can’t
do that. The mechanics are such that you can do this. So, we’re now trying to make a
hand that will flatten out for this purpose because we are
still learning that the psychological
and the social meaning of these devices. Both the fact that
they give them a role, they change the conversation. And
they were made by caring people through this mysterious magic of
the Internet. That’s something that I think we can all learn
from. Makes it really quite important. So, Enable now is in 80
countries around the world. I’m gonna be visiting a group in
Munich in a couple of days. But they’re everywhere. These are
all chapters on Facebook. But what I want to call to your attention is that each of these
chapters is its own invention using our
open source designs and using our
processes. But each of them is on their
own. There is as little central organization as we can possibly
have. And yet it seems to be working. And so, I want to talk
a little bit about how it’s working and whether other things
could work this way as well. Because it’s an unanswered
question, but I think it’s a potentially important
question. On that first day when it was just an experiment and I didn’t really
realize that I was, well, I say it’s sort of like getting
pregnant. You indulge yourself for 20 minutes and the next
thing you know, you’re responsible for all of these
children. So, for 20 minutes made this map
and it took over my entire life.
But when I made up the map, I also made up a slogan. And I’m
very proud of this slogan. It was my biggest and first
contribution. I said Enable is a global
volunteer assistive technology network built on an
infrastructure of electronic communications, 3D printing, and
goodwill. Now I would say not just 3D printing. But this is a really important
recipe. It’s a recipe, but it doesn’t
actually explain what the cake is like when you bake it. So, I want to tell you a little
bit about what the cake is like. How does Enable actually work.
There’s a book called the starfish versus the spider. And
it points out that, you know, animals that walk around on land
come with two basic architectures. A spider, like us, like a human,
has a central nervous system. If the central nervous system
goes, the whole system breaks down. A starfish and an Internet, if
it doesn’t become too centralized, is not like that at
all. It’s got a whole bunch of parts. They interact — some
parts interact with other parts. But there’s no central control. Therefore, it is very robust and
while it may not be able to go through
as much planful behavior, it may be able to survive and adapt in
a lot of different environments. Now, this is actually the way
Enable is currently organized. And frankly, every one of these
bubbles is at a different URL a different website or a different
server. This is an ecosystem. It’s not an organization. You
can go to Enable.org, but probably you’ll end up enabling
the future and that will then send you to our new home on Wikifactory
because we emigrated from Google + just in time and we’ve now set up shop again on
Wikifactory and we’re actually enjoying it quite a bit. Thank
you, Kristina and company. So, it is a whole ecosystem of websites and volunteers and
somehow it works. Now, there are probably people
in the room who understand this slide which I made better than I
do. But here’s what I’m trying to understand. And if grow
understand it better, this is why I’m here. I want you to talk to me. It
seems to me the challenge that we have as humans in
civilization is how do you take an idea from a few people to
many people? We all know how command and
control organizations grow. Right? There’s some sort of a
visionary, an evangelist and they start a company and they
find investors and then they hire employees and then they
have managers and then they have a commander
and they develop policies and laws and they can hire people
and they can fire people. And they can put people in jail if
they break their contract. And that’s the way the world works.
That’s the way a lot of the world
works. Except when it doesn’t. And it doesn’t work a lot. In fact, these days I think
we’re finding that the planet’s biggest problems are side effects of
command and control organizations that are not
really paying attention to the things they can’t do well. Enable is different. And I can
tell you with some confidence that we’re different because we
encourage groups and individuals to figure things out by
themselves and do it their way as seems fit to
them. We do have a shared vision and
shared values and shared goals. And we were lucky because a
smiling child with a 3D-printed
prosthetic somehow tells that story, even without words.
People see that. It gets picked up by the media. They’ll say I want to do that
too. And because everything we do is open source, and because
we celebrate people telling their stories to each other, we have shared practices which
bring us together somewhat the way the rituals and religion
bring people together. And then we have, increasingly,
these optional utilities and services. No one needs to do any
particular thing. But we build websites and web
tools that make it easy for people to
do what we think might be useful for
them to do. You know the expression, it’s
difficult to herd cats? People talk about this in software
development and engineering all the time. Well, my argument is that it’s
not actually that difficult to herd cats if you get the
structure and environment in which the cats will go where
it’s easy for the cats to go. In this kind of an environment,
all this guy needs to do is jump up and down and the cats will
end up in the room that you want. And so, you see all of
these different websites and tools that we have are really
there for the purpose of making it easy for people to learn
about Enable, to volunteer to give
devices to people who want them, or to request devices from volunteers who are
eager to provide them who want to
participate in what is called the planning. But
really just involves the discussion and the raising of
issues that the community needs to
think about to make it easier for people to
navigate this complicated ecosystem. This is our current
challenge. And to make it possible for
people to vote on how to use a certain amount of money which we
now have in something called the Enable Fund. So, we have some
simple governance. No one needs to use it and no one is bound by
it. But if you have a good project, I encourage you to
think about writing a proposal. Joining this website where you
can make the proposal. You’ll get feedback on it. Other people in the community,
that could include you, will vote the proposal up or down. And if 80% agree that it’s a
proposal worth funding, and at least 15 people vote on it, it will be approved
and we’ll dispense the money. That’s as close as we have to
governance as possible. Most enablers don’t use this
mechanism. But it does exist. And it does give us at least a mechanism for
accepting tax deductible donations. And helping groups develop new
devices or new initiatives that will be
useful. So, that’s the process that I
just talked about. So, we have chapters, we have teams, we have
individuals. They write proposals. They are reviewed
by who needs to approve them. If they’re reviewed, they go to
the Enable Fund, reimbursements
happen and it is all documented on a website called open
collective.com which is itself an open source platform to facilitate this kind
of thing. So, I want to say a little bit
about what motivates us and what could motivate more people to do
more things of this sort. We — there’s a
whole profession called economics which is all about why
and how command and control organizations get people to do
what they should do. We don’t really have a good theory for
why people should do what we do. But I think people should do
what we do. And so, here are the — some beginnings of an
idea. You’re probably familiar with Maslow’s
hierarchy of needs. The basic idea is that we all have needs.
We take care of the critical ones first. When we’ve got our safety and
our physiological needs met, we start worrying about love and
belonging and then about esteem and then about our purpose in
the world. We all have problems. We all struggle with
them. But there are other people who,
unlike us, who have the luxury of being
nervous about this end of the spectrum. There are other
people who are at that end of the spectrum and there’s
a really happy partnership between these two groups. Which
is to say, what we do is not charity. It is a partnership.
It’s not a sharing economy, although everything we do is
shared. It’s a caring economy. And that’s a little bit
different. My claim is that governmental
organizations, non-governmental organizations and businesses are
great. But there are gaps. And people still fall through
the gaps. And it’s not a pretty picture. We don’t take care of them
adequately. I think that Enable may be a
prototype of a new form of organization
which not only is creating a safety net, but is also creating
a pathway that allow people who are currently throw
throwaways to become important parts of a better human society.
To fill those gaps. And to help us do much better
than current institutions can do. As I look into the question of
what other organizations are organized this way and why does
what we do work and could we do more of it? I’ve
begun to think that it boils down to belongingness, purpose and
efficacy as facilitated by Internet communications. And what other — other
technical tools are available to us. But I recently realized that
that’s also true of alcoholics anonymous, which is a great
organization. Which has the same sort of distributed
organization. And it’s also about the
Christchurch shooters and other online terrorist networks. They
also have this distributed values and idea-based form of
organization. So, I think that the personal
connection in which a volunteer interacts with a child or with a
partner in making the world a better
place is a really important part of this. We gave this Incredible Hulk arm
to this kid just ten days ago. And he asked me the question,
which is the question I asked you. He said two things to me
which are really quite fun. And this is why we do it. The first
thing he said was, I’m never taking this off. Which, of
course, is what you want to see. As I say — what you want to
hear. This is not charity. This is the most rewarding work
most of us have ever done. And encourage some of you to try it
out. I think you’ll get a lot from
it. He said, he asked me twice, he said, are you from the
future? And I will suggest to you that
that is indeed the question. Because I think we could do much
more of this. So, thank you very much.
[ Applause ] I’m going to be here tonight and
tomorrow. I look forward to hearing from you.>>Thank you very much. [speaking in German] .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. . … Frank Reiger lips .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .>>So welcome back here on stage
one. The next session will be in German. If you need the
translation, get your headphones over there and say hello to the
translators over there. Please give a warm applause to
Frank Reiger. [ Applause ] hi
Hi, Frank. FRANK: Hey, nice afternoon. We want to talk about disruption
today. Especially the disruption of our reality. The destruction of what we deem
to be true. The truth, our surroundings, our
environment. And we have to go back to the primordial soup.
Especially post-modernism. All of us. I don’t see anyone here
who is significantly older than me. We’ve all learned to distrust
our own reality all the time. Everything we see. Everything we seek to perceive
we just put it — chop it up into small pieces, try to
analyze the power structures in the background. What’s the
narrative that’s being fed to you? Is there something that’s
intended to come across towards you? Are those narratives all power
structures? Are those legends that we see as structures, are they changeable? This is how we view or how we
perceive our reality. That we see it as fragmented . Nothing is something that you
can’t question. In the end it’s all like IT
security. You have to start looking at it a
bit, prod at it. And it soon starts to fall apart. The problem with that is, the
basic philosophical direction of the
post modernism was based on the hope that in the end something
constructive will happen. That something new will emerge.
That there’s a new perspective on reality that maybe can be shared
with others and potentially give you
a direction that people can rally behind. Or that don’t fall apart like
the structures of modernity. Unfortunately, that didn’t
happen. And we are now in a situation where we’re de-question things
in the sense that it’s a destructive form of questioning. So, as an adult you learn to
take shortcuts and you already know that this is not leading
anywhere so you’re just saying, okay, I’m not gonna go there.
But in the end the doubts are still there. There are no fixed points in
your life, no beacons that you can hold on to, no religion. There’s just a stream of bits of reality that more or less is
reliable or not. That you
understand or that you can see through better or maybe not so
much. So, as society, we already have a problem of
orientation and we don’t know where we’re going. There’s this business field of
disruption which was a buzz word for the last few years. And in this situation we come upon entities and blocks
that have underscore this predicament of the rest of
society. And they have found a way to use post-modernism as a weapon. So, you can say about the
Russians all you want, but they always have
that very nice trait that they will always
tell you what they are doing. Absolutely frank with you.
There is no holding back. So, there’s a lot of these media entityies that I just to want
use as an example. They’re not actually responsible
or it’s not their sole fault. They’re just — well, maybe a
little bit better than the others that utilizeing situations both real
and imagined. And this media empire that they
have been building over the last few
years actually has a purpose. You know? It’s not so much about pushing this candidate or the
other candidate, promoting one over the other. What they’re really about is to take the inherent problems
of our society such as climate change, migration, refugees.
All these problems, all these topics that a society is kind of
dealing and coping with. And where we don’t always have good
solutions and find good solutions to heighten them. To — so, when we look at
program, Russia today or Red Fish media
or Sputnik, for example, you can
see an interesting accumulation of
focus not necessarily a focus on one’s subject. They take every subject that in
some sort of way has potential for a conflict which can cause a separation, a
rift so that it can cause and then they heighten that
feeling. This potential of a rift within societies. For
example, Red Fish is this branch of this media empire that
targets left alternative movements and put them — give
them a platform and open a window for them in their
media universe. It is their example for Reuters,
Ruply. And they put the camera where something is going on. They don’t really care where
that is. The background to that is these structures of these
organizations have ideals. They have — they have examples, they
have role models. And these role models are kind
of — were there a few years ago. It’s like western propaganda
broadcasting stations that were feeding information to the east with
what was going on in the western world which was not reported on. Always with a little focus on
the things that were objectively bad. Sometimes they even made
stuff up. Sometimes there was fake news. But at the core it was about
showing — like playing cool music and giving the people that
are against the system, giving them a voice and a platform.
And from the perspective of the west to the east, that was
completely justifiable. This was a legitimate way of dealing with their system and giving
people in the east who are in
accounting to the state, we are provided to the west with a
window into the media public that they don’t get at home. So, they have this option of
being enlightened in that sense. And the Russians were like,
yeah. That is a great idea. We can do that as well. And so, they took that game and
transformed it into the 21st century. And just out of this
perspective, it’s possible to understand what they’re actually
doing. And when you look at the history
of Russia today and stuff like
that, they have for a long time were rather
neutral. Just neutral news coverage about
conflicts, problems. And then step-by-step they
slowly started doing disinformation,
lying and sometimes they still don’t do this. They do the — their shtick
where they cover what the others cover as well, just from a
different angle. And the question, do you want to
morally evaluate that? Or condemn them for that? And
you kind of go, okay, western media, great. Eastern media,
horrible. But from the perspective of the
Russians, they’re just playing a little game with us. And a game
that we used to play with them. And that’s what they say. They
say that clearly and they state that. There’s a very important player
in this game, Vladislav Sorkov. That
man is in different positions in the Russian government. He’s a
consultant to Putin. Sometimes he disappears into a ministry
until he reappears. And then there’s always this
kind of — sometimes he falls out of favor and then he’s back at Putin’s side
and he’s a super-interesting figure. He’s part of the
avant-garde theater. And he’s also a quite good
author. And he has this interesting
attitude. Or a character trait that he every once in a while he
will just write about what he does and what actually is
happening. So, this quote that you can see
is from an article that was published about a month ago with the title
“What’s really going on?” . And if we consider the Russians
have a tensey to say what they’re —
what they’re doing, you just have to listen to them
carefully. That’s quite a statement that he’s doing here.
It’s the statement that we have our own ideology. In this case,
the Putinism, he calls it that. That differs from the western
view of society. We’re not pretending that people do have a
choice, that they have freedom for self-fulfillment. There’s a
goal. There’s a direction. And there’s a leader. And that
is what society is focused on. And everybody who is in the way
of that are in the way and have to see how they can deal with
being in the way. And what he amongst other things did when Medvedev was in charge,
when Putin was not leading Russia, he
did a few very interesting things. He all sorts of — he would —
he would advocate for a bunch of different groups like Neo Nazis, liberal
democrats, climate change advocates and nationalists. So, like all sorts of different
groups. Youth groups. And punk bands. And then slowly, little by
little, he was telling them, the public, about what he did
and what the whole network of middlemen
and that in the end it was promoted bit Russian state
itself. And the effect was that, well, I
was actually doing political work here. And I was just a marionette or a
sock puppet for someone else. So, what he managed to do was to
undermine the democratic impulse of the Russian state itself. Or the Russian — the Russian
nation. Because what he did was he
showed them that their perspective on reality was so
flawed that it was basically all wrong and he proved them that it was
— how easily manipulateed that is. So, this was a fundamentally new
thing. It was — I wouldn’t want to
call it innovation in this field of political communication. The people who are working in
that kind of area realize that, of course. And started imitating it. And
if you now look at the Trump campaign with the total — the
total ignorance of facts and denial of
what was said yesterday. That the people who advised him,
we can be sure that they knew what they were doing. And they knew — or they looked
at Surkov and what he did. And it’s not a question of
communicating positively or negatively. Rather, creating a
piece of the population for which the meaning
of truth and lice doesn’t matter anymore. Or facts and reality just become
meaningless. So, in the west there’s
perception — in the west of the Russian
propaganda media. Where they’re being perceived as
an instrument of the enemy and leads to remarkable effects like at the
Munich security conference. She was indicating that she was
considering the Fridays for Future movement an act of hybrid
warfare. So, she saw that protest of kids
and her first impulse was to say the first people who gave them air time,
who reported on this movement was
Ruply, which is Russian today. So, because they basically have
a pretty keen vision for where things are happening. Where there’s new movements and
where there’s something going on that could be interesting in
the future. And, of course, the kids don’t
know them better — there was a TV camera to just look at it and
explain what they’re doing. And explain why they’re going — they’re flooding the streets. And now you have that effect
that when the chancellor looked at that, the first thing she
saw, it was first reported by the Russians. Or Russia Today
and therefore it was an act of the Russians. We had that line
of thinking in the past already. If you look at the GDR,
basically the Stazi, the state security, anything that works against them was
directed by foreign agents or foreign nations. That was always the default
assumption. But what actually happened was
most of the time was that they actually
manageed to if there were movements, they
weren’t involved and didn’t care about. They basically gave it a sense
of — they give him a smell of being
controlled by by foreign agents. And just by giving them air
time. So, he can discredit them in a way. Well, if that really happens, if
automatically who gives it air time first indicates — or gives it
an impression of who might be behind it, this is problematic
for our society. This is gonna give us a big issue. So, if you now look at Britain,
this is just a field where not just the Russians are playing. This is a battle royale of all
the people that were involved in the interior politics that were
manipulateing the national perception of the debate. And the result was that there’s
a society now that cannot communicate amongst itself
anymore. They’re not in a position to
figure out what they want. It’s not just two camps within
that society that there’s —
obviously there’s the leave camp and the remain camp. And they’re both fragmented.
Especially the remain. One part of it wants like being — have a
similar status. Some part of it wants to stay in the EU, some
other parts want some other things. But everybody is
convinced that their way is the only way and the
right way. And that anybody else who has different views will lead to a
collapse of the nation. If you assume that kind of base, then there is no debate anymore to be
had. They have forgotten how to —
that those facts that they’re talking about might even be lies or just
fabrications. For example, the bus, the red
bus that went to London saying you — we sent that much money
to the EU every month. And basically everybody knows now
that all of this was fabrication. It was just
complete garbage. And there’s this nice Brexit
video where it showed that mechanism
of this targeting of emotions and
creating fear and doubt and it really shows how to do that. It was by Benedict Cumberbatch. If you go down into the swamp
there’s that point where you’re asking yourself, how did we even get here? How — how did this actually unquestioned — or like there’s
a political, scientific debate
happening around vaccines. We were at a point where this was
actually really clear. There wasn’t
much debate happening, it wasn’t questioned anymore. And how do
we get to the point now where we’re not sure anymore which reality we’re even allowed to
accept? We basically have to question everything that we’re being fed
and this is — there is a point
where people can’t even take the step back anymore and say, well,
maybe science is right. This is not something they can do
anymore. And obviously it depends on how
much fear is being sown. But the mechanisms that lead to
this moving away — so far away from
accepting certain truths that is social media and what social media —
bottom line social media is a platform of manipulation.
Facebook was created, YouTube, they were not made to make our
communication better. They are there to click more
advertisement. They’re there to manipulate. The level of
manipulation is being sold us to at the purpose, nothing but
that. And all that people did was
using these mechanisms. And then
there was this other factor of acceleration. This enhancement that these
commercial social media platforms live based on the fact
that people stay on these platforms for potentially as
long as you can. So, they click more. So, then you can generate
more data and then you can better
understand the user behavior on these sites and you can sell
more ads. So, they measure what posts,
what videos cause people to share them, to engage with the
platform longer, to stay on the platform longer. And what is
similar to that. And then we have these funny effects that videos on YouTube that much clicked by people who
have a lot of time. That have a lot of free time during their
day. And that then continues to click
on the same keywords and the same paths. They cause the algorithms to
understand these paths better, learn these paths. So, for
example, if you were answering moon landing for — because
you’re interested for the Apollo 13 mission, three plays later
you would go to Nazis on the moon
and the moon landing never happened. They have gotten a
little better at that on YouTube. They’ve done a little
bit. But these mechanisms that lead to emotionalized messages. Fear, hate. Are being
preferred. Are favored. Leads to especially political
news who are not positive, that are not
peaceful. That are not unifying are being heightened
and work better on these platforms. Automatically they’re being
preferred and favored and they will be shown a lot more. The logical consequence of this
is that people live in a complete conflict, an absolute
conflict with the physical reality. That do not have
anything to do with that anymore. And they don’t want
to, maybe. And if you look around how pop
lair popular these flat Earth memes
are, you have to kind of remember and consider that some of them are a game.
They’re so far out there that you’re like, yeah, yeah,
whatever. Funny, funny, ha, ha. Let’s look at the videos that
they did. And for 98% of the people, these
videos, that’s how they would work. They understand the humor
and irony in this. But the mechanisms that then go into
place that the people that share these videos more often, clicken these
videos more often. There’s 2% that go, oh, this
sounds rather logical. And the only question is it
turtles all the way down or is there a
layer of elephants that keep that thing up? And you can find
those discussion it is you click around and seven around, you
will find these discussions and debates. And Anton Wills wrote this play, Reality is what You can Get Away
With. And we have this problem that we kind of live by this
rule. We have no reliable resources and reality anymore. We try and we grasp and grapple
or we give up. And these mechanisms of
manipulation that we are being affected by are quite
impressive. There’s these aspects they kind of want to go
into further. One of them is speed. You can see what you
see here is the curve. Is the curve of a search interest after
pizzagate. Pizzagate was something during the American election, a completely
meat up made up narrative about the — there was a child abuser amongst Hillary’s
people and that was concentrated around
a pizza place that would be like the center for a child
prostitution. It was completely made up. Completely. There was no, no aspect of that
that was true. There was no relation to
truthful things. It went through the roof. There were so
many people that believed it. There was an actual attack on
the pizza place. And this kind of distortion of
reality causes us to believe histories because we want to
believe in them even though we don’t see any factual reason
to believe in them. We’re not capable any longer if
any of what we’re reading is based in facts. Because the west best that we
have as facts is Wikipedia, to be honest. Like German Wikipedia, at least,
is something that when it comes to
fact’s completeness is a bit
problematic. When it comes to a lot of other aspects, especially
in the English Wikipedia, there’s a bunch of questionable
things in there. But that’s something that we live.
we’ve decided to be okay with that. Because most of us who have used
it, sure, that you can interpret the P value correctly in an
academic paper. Yes, exactly. So, we have — we’ve lost the capabilityies to deal with
reality because the reality has become quite complex. And so, we but then we don’t
rely anymore on experts because experts are biased, if in doubt. And then flexibility is within
the tax and political sphere is something that’s quite
impressive as well. While in Germany there’s still the discussion on social bots or
paid posts on Facebook and such things like that. This discussion that was — had
to be led — should have been led in 2016 during the American
election for the President. There’s other people who were
playing this game. They’re a lot further advanced now.
They’re not really sticking around Facebook and Twitter and
social bots anymore. They’re on WhatsApp. They’re on Telegram,
they’re on Instagram. And they will find new ways of mass communication that is targeted
that causes people to not do
something. When you look at the election of 2016 in the US, the attempts of manipulation
that went on were potentially not even
completely deciding factors in the fact that Trump won in the
end. There’s things that say that that’s what caused it. But
then there’s also an argument to be made that it would have
gotten into office anyways. All elections that we are seeing
right now are super-close. They’re super, super close.
Everywhere we have these decisions that are kind of
decisions of caps. You have them to the left or to the
right. And there’s a tendency to one or the other. It’s like
literally a couple of thousand people that are deciding the
vote. So, manipulation is
super-attractive. I don’t need to manipulate a lot. I just
need to manipulate a little. And that causes secondly that
manipulation attempts that are going on, or that went on during
the American election were not targeted to
get potential Clinton voters to become Trump
voters. They were only targeted to get
potential Clinton voters to not go to vote because that was enough in
this election. So, cause confusion, cause insecurity.
Spread a lot of histories, narratives. Where you’re not really sure and
it’s somewhat plausible. And this structure of causing instability to even stop
people to go voting like by breaking
electoral machines. And it’s really hard to understand this kind of disruption and it’s
not just a pure political communication that’s being
effective. Affected now. It’s a bit on the dark side of
the power. We have this discussion in Germany. I don’t
know where it comes. It’s about social bots. Someone needs to have played
with this concept for a bit too long. And, of course, that
works on Twitter quite well if you do that. But
the reality is something completely different. And a lot more scary, to be
honest. There is a large research shows
and proves that stuff that’s categorizeed as social bots are
actually humans. In this case, there was a test.
They check the out, how good are the social bot testings. They
tried to find out are the mechanisms of detecting whether
or not something is a bot or real, are they actually working
quite well? And if we look at what happens
in Germany during an election, we can see that specific parties not only use digital
enhancement, but they also have a lot of people that have a lot
of free time on their hands. And they just tweet 200 times
per day a lot of memes that they get out of a central database.
And sometimes they make spelling mistakes. And sometimes they
vary in their posting frequency. And then sometimes they’re in
different locations. They’re not bots. They’re just humans who have
quite a bit of energy and that’s where they put the energy. And putting out hate speech and
if we look what RFD is doing, for
example, our right wing party here in Germany, there was about 80% of all electoral — I’m actually
finding this rather disturbing because
there’s a script maybe that handles 5,000 accounts. That’s
just a kid. But if there’s actually really
5,000 people that actually go out and Tweet, that’s even more
scary. There’s another aspect that
interestingly enough, we haven’t seen that in Germany in the
election campaigns so much which is called compramate. It’s publication of information
about politicians that is compromising in their nature or
puts them in a bad spotlight . That was — in the past that
was the demain of Yellow press,
tabloids, National Enquirer in the USA a paper that’s renowned
for this kind of thing. There’s — they all have their — their collection of things that
they can use their kompromat collection,
I want to call it. They basically dominated that kind of
business for a while. But this is not true anymore. Right now leaks of data and
hacking of data has become so common place that the — that we deal with intrusion of cyber into the
— into the political space. There’s the Macron campaign in
France was the target of cyber attacks
by groups. And in German there was — was
in a supposed sole perpetrateor
collected a lot of information about politicians, took over accounts, extracted
data. Sometimes very unpleasant data
in where as a conservative politician you should think about it if you
maybe not show your sexual preference at your — that is
maybe not — may not be as aligned with your party’s
ideals. Maybe you shouldn’t publish that on to Facebook.
Just a thought. The normality of digital attacks
these days on devices and that may
contain interesting information. This is something we have gotten
used to already. Every now and then there’s a
headline that — where it hits and a new politician or maybe someone in
the economy. This has become a background noise. It’s like the
weather, you know? Every now and then it happens. Every now
and then you have a storm. Very similarly similarly, it’s
now in the economic sector. If you can’t read it on the
left, this is a sign at the — the entrance
of an aluminum smelter. They had a larger cyber incident
that they couldn’t reproduce. And they basically then said at
the beginning of the sentence, they couldn’t find the source. What they said to the employees
is maybe can’t attach a computer to the network. It will maybe
get infected and we can’t do anything about it. It was
interesting. It appeared to be a trojan attempts blackmail from
the company. For those that don’t know, Mast
is the biggest shipping company in the world. And they went
through their headquarters and basically shut down all the switches that were holding
up their computer network just to be
quick about shutting down their network. And for any given point in time there’s a six figure number of
containers that are shipped around the world. And for the
majority of them, they couldn’t find them anymore after
that because they didn’t know anymore. This is an issue. A
lot of those contain fresh food. And if they’re in some port for
a few weeks because they don’t know what it is, then it just
goes bad. Well, but the interesting thing
is the Trojan that they’re using was actually not an attack, at least not for
ransom reasons. The current attribution is a
state attack. This is where we need to have
context for these kinds of attacks. Because for attacks in
a digital space, it’s usually very, very complicated to figure
out who is behind this. Every now and then we’ve — lucky
circumstances you can sort of figure it out. But most of the time it’s just
guess work. So, we’re just trying to analyze
some traits. For example, when the last update was happening
and it was during office hours of St. Petersberg. Maybe it was
the Russians. Maybe there’s a couple of
letters. So, it must be the Russians. Then every now and then you have
the case, okay, a similar one we have seen was done by the
Russians. Or at least we saw it there. And the next one that
looks similar also has to be done. So, the guys that
actually know what they’re doing, they become more and more
careful about this because if you didn’t look at the Wikileaks documents
leaked, the majority about the C CIA
documents where they were outlining how they were doing
this attack. There’s evidence in there that there’s a large scale effort to try to
acquire or at least steal software from
other nations to take it apart, to build it back together. And to pass it off as someone
else’s. Because they actually know all the tricks. Or the
ways that software is analyzed to try to attribute it to a
certain nation. And they’re playing this on a meta level. So, the whole game of guessing
who it was is not something that you
can credibly do. There are very few exceptions. Like the Dutch that actually
managed to intrude on a Russian attack
center by taking over the camera of the door. And in this
particular case, if you have that kind of insight, then
okay, in this particular case you can be sure that it was the
case. But this is very rare. So, if the NSA now says we
protected the campaign in the midterm elections from the
Russians and, of course, it’s all secret so we can’t tell you
anything about it. There’s no evidence I can show you. So, the — this is not like the
Russian hackers on their screens suddenly have a pop-up
that says, oh, yeah, we found you. We know who it was. We
know that it was you. But this is not how it works in reality. We just saw that there’s an
attack on our election. And not against not against the
electoral process, but rather against the counting. And this
software that is being used to count the votes is also a
problem here in Germany. For example, for the next
European election that we have here, this is also going to be
an interesting topic again. And if it ever came that far for us, which I don’t think so
because there’s obviously other methods that are being used to
convey the result, for example, via telephone, then we still
have that question. If we find a mismatch and
something happens, then there’s the question, who was it? Was it the Americans or the
North Koreans? The Russians? In our sector we have — we have
a dice that basically has a country on every side and we
just throw it and go see what it was. So, if we don’t get a grasp on
this IT security problem, then there
will not be an exit from this will der
wilderness of mirrors. And this is something that an
ex-CIA boss who basically went mad over this thing. Who got so
involved in this I know what you know that I know what you know.
This is so meta that he went insane over it. The complexity that we have
today is even worse. We don’t even no if it’s just
one national security agency that’s working against the other
one. It’s not the cold war anymore.
There’s N actors here, and any one can pass each other off as
someone else. And the main issue actually is,
and everything comes back tots fact
that our IT systems are vulnerable.
[ Applause ] So, there’s still more things to
come. For example, deep fakes. For those that don’t know, deep
fakes is the — is a way that can be
used these days to fake a complete
video including audio if up enough pictures of
the person that you want to appear in the video. And the
result is quite impressive. It’s actually hard to
distinguish from reality. So, there’s a lot of porn made
this way from popular actors that never did porn. But
there’s just enough material out that you can use to construct
this kind of fabricate. And similar things are already happening in the eastern
European states in the political sector. Where there’s enough Kompromat
being produced and they’re being so callous that they don’t care
about this anymore. Maybe that’s not as comforting. But in Africa there’s an
interesting case where there was someone who actually ordered a deep fake in
Gabun. The nation of Gabun. They produced a new deep fake
video about their New Year’s
speech. And it looked a little bit like
— so, he — he actually had a stroke
and they weren’t sure if maybe what they saw was just strange
because he had a stroke or if it was because it was fake. And
this is only just the start. This is already so close that
you can’t really pass it off anymore. Granted, it was because he a
stroke maybe. But this was just the start. So, deep fakes are this
ubiquitous, then it’s basically every video Kompromat worthless. People
can’t tell if it’s fake or not. And even if it’s real, then you
can still say it’s a deep fake. It’s another step in this
disillusion of reality. It’s falling apart. And you can — you can’t even
know if it’s true anymore. You can’t prove anything. So, the worst consequences of
the disinformation warfare is in
with strong ethnic conflicts like Sri Lanka
where they basically shut down Facebook for three days because
they didn’t see another way during this. In India every now and then
there’s a problem where someone on
Facebook puts out a fake news or alternative
facts and then that some people kill some
other people. And obviously in Sri Lanka and
in the rho Rhong iga areas, those are targeted. So, they try to create something
that shows a reality and is so filled with hate and creates so
much hate that it leads people in reality to kill
other people. And what this leads to is that lie becomes the
reality. And to the normal level of truth. And we can see
that amongst each other. Like in our somewhat civilized
society. We have the case where offices
of prosecution in their public communication lie. We saw that with G-20 where we
had — we kind of looked at it closely
and what would happen in the courts. Where like all these
bad and evil terrorist things slowly but surely are falling
apart. And will it become apparent that
police brutality is what actually happened. And does that stop other
policemen? Just the conflict that the fusion festival
has. They’re dealing with a police president that claims
things that are simply untrue. And it continuously tryies to
sew doubt that is not there. In order to get the festival to
open the grounds for police to patrol the actual festivals.
Because they’re not into people living their freedom on those
festival grounds. Like, this festival, for example, all of a
sudden there’s a bag control. Like there’s — and everybody
just accepts it as the new reality
has to be because of security. The purpose, well, what is this
for? What’s the purpose of this? And then all of a sudden because
bags are dangerous and the last 17,
18 years there was no problem. This acceptance of, I just claim something and then it must be
true leads to our own agency is something
that we’re giving up slowly but
surely. The advertising platforms that
we call social media have to — we just
have to understand that they’re not there to further your
knowledge and have a political discourse. We can just
give that up. Seriously. We can play there a little bit
longer and tell us that we can fix this
problem and with deleting methods or
content filters or upload filters we can solve this
problem there. But that’s just bullshit. It’s not gonna work. It’s ne’er
going to work. All the videos with sound there, all the methods we’re trying to
filter contents that we don’t want or other people don’t want
will be used by the other political side and then we’re
once again kind of complaining about that. So, what we have
right now in social media is the attempt request
500,000 people sit in a stadium where everybody has a megaphone and we’re trying
to have a conversation about abortions. It doesn’t really
sound very goal-oriented. And we need other means. We
have to talk about the power of manipulation of these media
platforms. And and I use this word carefully and I choose this
word carefully because social media, especially Facebook and
YouTube, but also Instagram and also Twitter to a certain extent
are made give people power of
manipulation and sell that power of manipulation. If you don’t believe that, then
click on Facebook and try to order
target advertisement and understand how precise these
mechanisms are. That is the goal of these platforms. The
only goal and this whole we want to bring the world together. That’s complete bullshit. They
care for advertisements. Same goes for Google. I’m not saying
that anybody is better or worse in this game. It’s about
creating as much power of manipulation as you can and sell
that to the highest bidder. And you can, of course, say,
but, yes, they also, like, brought about this great
Internet. And great. Wonderful. But that time is
kind of over. We have to start thinking about other things. We’ve reached that level where
this model of profit where hate is
becoming profitable is something that is
not compatible without a democratic society. And maybe
that’s something we need to address and change.
[ Applause ] I have this thesis that one of
the problems that we have on these social media platforms is that the differentiation between
individual communication, like one-on-one,
people talking amongst each other, and public communication
is being dissolved. Look at Twitter. You reply and respond to a post
of someone that you think is interesting. Or that you want to complain
about somebody’s wrong on the
Internet. Then half of the world is
watching you. And is also complaining and is getting on the bandwagon and is getting
really upset what party A or B has said. And then if you talk about —
with someone that you know and then other people will join that
discussion and conversation. And then two weeks later the
forest is still burning because people are not getting back on
the same page. That’s the kind of normal state
of being. Of our communication today. That we consider it as
something of publication instead of saying, all right. So, this
is something that I want to maybe talk about with that
person and then private message and that is it. And that’s
good. And that’s fine. And this kind of dissolving of
separation between private and public sphere. This constant
presentation of your private and self. And this constantly being
present, being pretty. Being consumable leads to human
communication being perverted in a sense. And that’s something
that you can maybe say, I can change that for
myself without actually having to do a lot. Just think about
it for a second. Think about what you’re doing. Is this communication with a
small group? With singular people? Or is it publication?
Is that something that I want even to see? This thought. Do
I want everyone to consume this idea and this opinion forever?
And this separation in your own head that you can make for
yourself. You don’t really need a platform for you to make that
decision. But I think that’s one of the fundamental problems
that we let these platforms dissolve
and like create this kind of merge of the two spheres of
communication. Maybe the following is more of a
question than an actual statement. Maybe we need smaller units. We
need communication in smaller groups with roles that you can
agree upon. I think the most interesting developments that we can see
right now maybe in the social media sphere
is mass mastadon. You’re looking for a server, it
follow ascertain set of rules. For example, no Nazis here,
here, up to like this is only for Nazis. All exists. All
out there. And you agree upon a selection
of your server and you agree upon what you kind of want in
this sphere. What is kind of the common sense. And then you can still think
about if the pure vegan server is
compatible with the server if you maybe don’t
really like each other and you don’t go into a federation
together. But you can still get connected if you want and you
have the advantages. But you’re building groups where they have
a consensus, a common sense amongst each other.
There’s certain things that we agree upon. And we don’t have to waste time
to find out and figure out who is
for or against free markets. Or free market economy. Sorry, my voice is breaking a
little. I have to — okay. Maybe you think about this for a second if this is something that
is an alternative for you to just want
to sit in a stadium with the whole world straightaway. And
maybe the people that you know and like are enough. Thank you
so much for your attention. Thank you. Thank you very much.
[ Applause ]>>We’re going into the
intermediate announcements now. [speaking in German] .
. .
. .
. .
. .
poo . .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
.>>Welcome back to stage one. [Speaking in German] German] — here is Cory
Doctorow. [ Applause ] CORY: Hi. Thanks. So, whoops. Gone one too far.
Yeah. So, hi. I have been fighting over Internet
regulation since 2002. And back then the big issue was
whether Napster should be liable for things the users did. And
back then it was a real struggle to convey to people that the
outcome of the Napster fight was about so much more than access
to music. It was about whether the network that would grow to
be the planetary scale nervous system for the species
would be regulated like a troublesome jukebox. And as the
years went by, something changed. People woke up to the
fact that the Internet wasn’t just a glorified video on demand service or the world’s
greatest pornography distribution system or the best way ever conceiveed
of for recruiting Jihadis to commit acts of
terror, it was instead important to political and civic engage,
their play and business. And it didn’t mean we couldn’t
ever hope to regulate the Internet,
but we needed to regulate it carefully with
gravitas. And a sense that any misstep
would have unintended consequences that would ripple
out into many domains of our lives. I held out hopes that we might
regulate the Internet in the future, but we might reverse the
stupid mistakes of the past. Like the rules that ban
circumvention of copyright protection systems. That was to add legal weight to nuisances like region codeing
and DVD players. The way that it works is
removing a copyright lock substitutes an infringement even
if you don’t violate copyright. And it had to work that way,
de-regionalizing a copyright infringement,st it the opposite
of a copyright infringement. If you buy a DVD from the store
and pay the price they asked it, and take it home and watch if,
that’s the opposite of piracy. That’s what copyright says you
have to do, you pay the price you’re asked and enjoy it in the
way you’re expected to. The only way these
anti-circumvention rules can be effective in stopping you from
de-regionalizing your DVD player is if they ban
removing a copyright lock even if you didn’t infringe
copyright. So, even so, it wasn’t hard to
remove these locks. They needed a legal backstop
with a legal prohibition as well as a technical measure.
Originally it was fringe applications like the DVD players and the
Sega Dreamcast, you had a copyright
lock to protect something not protected by copyright. But any
time, anything with software in it grew a copyrighted work in
it. All software is copyrighted. Once you have a
device with software inside of it, you can add a copyright lock
and all of a sudden these anti-circumvention laws like
article 6 from the European copyright directive from 2001 and the digital
copyright act ban changing that device to do legal things. Over time, we started to see
copyright locks creeping from DVD players
and Sega dream casts come into our lives. They are in defibrillators, in
thermostats and coffee makers and in voting
machines and in tractors. And these laws, they’re
attractive nuisance. They promise manufacturers, if
they skin it in digital rights management and design them that using the
product in way that the manufacturer doesn’t like
involves removing the copyright lock, then they can make it
illegal to use a product in the way that a manufacturer doesn’t
like. And in the US, violate this law,
it’s punishable by a five-year prison sentence and a $500,000
fine far first offense. What that means, if you use a copyright lock judiciously, you
can boop strap a bootstrap a new regime, the
felony model, where ways that benefit them instead of your shareholders
becomes a literal crime and the taxpayers pay to enforce it.
But still these laws, they chill security research. Because when
a security researcher reveals there’s a defect in a system
with a copyright lock, they make it easier to remove that
copyright lock. And so, security researchers treat these
as a no-go zone and they don’t want
to talk about the defects in these systems. And so, now we have this
ever-expanding constellation of devices that treat owners as
adversaries. That security researchers can’t audit. That
good guys know about bugs in, but they won’t tell you about
them. And bad guys know about bugs and
exploit them forever because the good guys never tell you that
the products you’re using are defective. These laws were ripe
for reform. But as the years went by and
people got online and network policy touched their lives. The absurdity of copyright law,
this regulatory for the entertainment
industry being the one tool to regulate our online laws, it was
foolish. And it seemed like maybe it was ready for a change.
Maybe arriving at a moment where we would start to take network
policy seriously instead of treating it as this
evidence-free zone. But then the last 12 months happened. And the last 12 months have been a nightmare assault, a blitzkrieg of laws
ushered into the law with the worst of faith. We had the Australian ban on
working security systems, America’s ban
on the online discussion of sex and sexual assault. And the
European ban for online filters for all expression. And after the Christ
Christchurch attacks, we have global initiatives to take down material that is
allegedly terrorististic at the drop of a hat so that humans
cannot review the material before they are taken down. These are already law in
Australia and steaming towards becoming law in Europe and in
the UK. The last year has been a real-time high-speed Chinafication of the
western Internet. The networks are subject to surveillance and
control that has not been dreamt of since the old days when every
communication we had electronically went through one
state-owned monopoly telephone company that had total control
over who spoke, who they spoke to and could monitor any one of
those calls. But unlike those days when the telephone
was wired into your kitchen and attached to the wall, now the
telephones are in our pockets and know everywhere we go,
everything we do and even we talk to. And those are now
subject to a kind of control that even exceeds the control that we had in the day of the
telecom monopolies. And unlike those days, if you want to spy
on all of our communications. You don’t need to hire a giant
room of Stsasi niches, you can get an algorithm to listen to
everybody. And I want to take moment before we go on to how we got here to review
in detail all the terrible stuff that’s happened in the last
year. Because in 2019 news comes at you fast. And it’s
easy to miss things as they happen. So, let’s start with a ban on
working cryptography. Since the 1980s, law enforcement and apparatuses have been warning
that if everyone had access to
cryptography, terrible things would happen. Until 1992, the
American security agency classed any working cryptography as a working
munition and refused to allow civilians to have access to
working crypto. Instead they had a scrambling
system, Dez-50, they said that foreign
spies and criminals would never break it. But it was so weak
that the NSA could break into it to spy on bad guys
if they absolutely had to. And in 1992, the American
frontier foundation represented an undergraduate at the University of California Berkeley, DJB, now a famous
cryptographer. He was publishing it as open source, as
free software on the early Internet. He was publishing it
on usenet, which was a messaging platform and posting source code
to it. And we argued to the ninth circuit and the ninth
circuit court of appeals that publishing source code was a
form of expressive peach speech and
protected under the first amendment. And the ninth circuit agreed,
the appellate agreed, and they didn’t like the chance the at the Supreme
Court. We have access to working cryptography. It works so well, if you take a picture
if the full disk encryption is turned
off, it is now, that before it’s saved,
that if every hydrogen atom were turned into a computer and did
nothing until the death of the universe to guess how to
unscramble your picture without asking you for the password, we
would run out of universe before we ran out of possible
unscrambleing keys. So, cryptoreally works. And today every one of us uses
strong encryption. Every network automatically encrypts
our data between us and the router. Every Bluetooth automatically
encrypts our key strokes so they can’t be
intercepted by people standing nearby us. Whether your
computer or printer or another device receives a software update, we use crypto to make
sure that it came from the software manufacturer and it
wasn’t inserted by a malicious party to attack you by
compromising your device and steal its data or
compromising your device and breaking into other devices on
your network. Crypto protects the
integrity of your data and communications and it protects
your devices. It’s what stands between you and identity theft
or someone murdering you by compromising that computer in
your car or in the defibrillator
immigranted in your chest. Which is great news if you want
to protect yourself from malicious attacks. But authoritarians have never
given up on the project of breaking working crypto. They cite the four horsemen of
the information apocalypse when they say it’s time to get rid of
the crypto. Those are terrorists, pedophiles , criminals and drug dealers.
Thing they can ban working crypto, but still keep people safe from
criminals. Turn bull in Australia was going
to ban crypto. But they would have cryptothat
protected them from bath guys. When they said it would violate
the laws of mathematics to develop a system that worked perfectly when bad
guys were attacking it, he said the stupid
stupidest thing in the history of Internet regulation. It’s a
crowded field, but he wins. The laws of Australia prevail in
Australia. The laws of mathematics are commendable.
But the only law that applies in Australia is the law of
Australia. Now, you laugh. And you should laugh because
it’s so risible that a grown-ass adult
in a suit with high office said this thing. But a year later,
Australia banned working crypto. And they passed a law that
requires any technology company with a
Nexus with the Australian government to insert back doors into their crypto on
demand. That’s what’s happened in the encryption wars in the
last year. Now let’s talk about everyone’s favorite subject,
sex. The world’s democracies, when the Internet came along,
they needed to figure out how to relate to the Internet. What
the liability regime would be. And the answer is something
called safe harbors. And the way that safe harbors work is
they try to preserve a platform for speech. But also create a
streamlined system of justice for people who are harmed
by that speech. Let’s think about how this works in the
offline world. In the offline world, people who are party to
bad speech are often liable for it. So, if you’re a book seller
and your bookstore has an infringing book, under many copyright systems you, the
book seller, can be held liable for selling the infringing book
even if you didn’t know it was infringing. Even if the publisher promiseed
you it was in compliance with copyright law. So, people aggrieved under
copyright law, they can pick the person in the value chain with
the deepest pockets, the publisher, the printer, the
book seller. They can go after one of them or all of them and
get justice out of them. Now, that would be really hard for
people who provide online platforms for speech. Because
you couldn’t possibly hope to evaluate everything that a user
said on say Twitter or on your local bulletin board system for
people who like cats. And knowing everything posted to the
bulletin board complies with copyright law and hasn’t been
pasted by another cat board from someone who really liked a
message and wanted to pass it off as their own. It would be
impossible to have a platform for speech if you were on the
hook for copyright. What safe harbors do is they take platforms off the hook for bad
speech access by their users. But people who are damaged by
those user’s speech, they get something in return. I used to
be a book seller. And I worked in a bookstore. And if we had a
book that you thought infringed your copyright, you could not
walk into the bookstore, point at the book on the shelf and
say, that book infringes my copyright. I demand that you
remove it right now. The minimum wage clerk behind
the counter would say you’ll have to talk to my boss. And
the boss would say you have to talk to a judge. Once you have
a court order, we’ll remove that book for you. And safe harbors
do away with this due process as well. They allow people who
have been harmed by bad acts of speech to simply present a
claim to the online platform and usually to some — the
equivalent of me behind the cash register. Some minimum wage
employee. Often in a Pacific rim data
center or, you know, call center. And they get an email that says
this infringes my copyright, this is harassing me. This is
doing something else that substitutes a bad speech act. I
want it removed from the Internet. And more often than
not, it just disappears without having to talk to judge. This
is the balance that the safe harbor struck. Immunize platforms from the
users, and give users the chance to remove the speech if it harms
them without talking to judge. If there’s a disagreement there,
then it goes to court. This is a wildly defective
system, but we have a worse one to replace it with. In the US, the safe harbors
regime, it didn’t protect companies that knowingly allow
sex trafficking to happen on their platforms. If you had
actual knowledge that one of your users was
engaged in sex trafficking, you had liability if you didn’t shut
it down. But that wasn’t enough. In 2018 the US Congress passed
Sesta-Fosta, depends on whether you like Senate or the Congress
more, and it says you have an obligation to know whether any
of the speech on your platform involves sex trafficking. It’s
not enough that if you know about it you have to take a
down. Now you have a duty to find out whether there’s sex
trafficking taking place. And this is a duty that’s very hard
to fulfill. Because it’s hard to know a priority if you’re on
a message board where consenting adults are talking
about having sex with one another whether one of them
isn’t consenting. Whether one of them is being trafficked.
That duty turned out to be so onerous that virtually every
platform in which people discussed sex and sexual assault
for the purposes of consummating a sexual relationship in the US
shut down. And this has been a great harm
to actual sex workers who now tell us that because they’ve
lost every forum in which they warned each other about clients
who are client or dangerous, that they face more
violence. And they can no longer arrange acig nations, they’re walking the
streets again. The only people whose lives have
been made conclusively better are pimps. Now that sex workers
are working on the street again, they need to have pimps for
protection. This had been has been a golden age for pimping as
a result of this bill that was notionally intended to protect
people from human trafficking. Sex has become the go-to means
for attacking privacy, anonymity and speech. Starting this
summer in the United Kingdom, every platform that provides
access to erotic material, adult material, will be required to
validate the age of every user who accesses that material. If
you’re a UK-based site and you don’t adhere to this, you will
be punished under British law. If you’re an offshore site and
you don’t adhere to this, you’ll be blocked at
the border using the national firewall. And these age checks
will require a credit card so that when these
databases leak, not if these databases leak, when these
databases leak, the attackers will be able to cross reference
your sexual fetishes with how much money you have and whether it’s worth
blackmailing you. And — and so, that’s what’s happening in
the UK. And then there’s Tumblr. Poor Tumblr. Tumblr was once a haven for
sexual expression. Including notably women’s sexual expression and sexual expression
by LGBTQI people. And in 2018, Apple got really
worried about Cesta-Fosta and we’re not
going to carry the Tumblr act anymore because it might have
sex trafficking. This was what Tumblr needed to
root out anything that might be sexual content on the system. But algorithmic filters are
crude and imperfect instruments. These are all items from my own
Tumblr that have been blocked by Tumblr. I want that
particularly call your attention to the one on the far left
there. That is Tumblr’s image, example
image, of the kind of nudity that Tumblr will not block. It’s been blocked by Tumblr.
Yeah. So, speaking of filters. Last month the Europe Union took
the most drastic step to censor the
Internet in the free world. They passed the copyright
directive, the dreaded article 13, now
article 17 because nothing is easy in the European parliament. It’s a rule that obliterates the
safe harbor not for sex trafficking,
but for copy right infringement and makes the users liable if
they have copyright infringesment. Even if you
think they don’t have — if you don’t have actual knowledge of
the infringement which creates a duty to monitor everything
your users post to find out whether or not they’re violating
the copyright directive. Now, that has to mean filters. Of
course it means filters. YouTube alone gets 400 hours of
video every minute. There aren’t enough human beings who
understand copyright on Earth to review all of
the footage that the users upload and make a determination
whether all that have footage, any of that footage infringes
copyright. And despite the fact that this
obviously meant filters. During the debate, the
proponents insisted that it didn’t require filters. In
fact, they said right there in the directive it says if at all
possible, don’t use filters. Well, all right. If I made a law that said I
require you to produce a large, gray,
chart chartmatic African land mammal
and it must have four legs and a tail and tusks and a trunk. But
if at all possible, it should not be an elephant. I would
still get an elephant. And if we say the directive
requires that you know about all of the things your users post
and make sure that they don’t infringe copyright and you have
users who post more than nay human moderation team could
review, you will always get filters. And now we know, of
course, that it means we’re going get filters. Because
within days of the directive passing, the
French government announced that it would transpose in into law
with filters. And then the German government admitted this
would probably need filters too in the national implementation.
And then the commissioner who pushed for this said of course
it was going to require filters despite what he said earlier. So, in 2017 Germany became one
of the world’s great net exporters of
privacy rules. In 2019 Germany and France teamed up to take
away from America the world cup for exporting the world’s worst
privacy and censorship regimes. Everything we post from now on
is going compared to a database of
copyrighted works and blocked if it looks too much like a
copyrighted work. But when I say it’s going to be matched
against a database of copy righted works, that’s not quite
true. It’s matched against works that someone says are
copyrighted. But there’s no checks and balances in the
system to determine whether or not all the works that are
include ready copyrighted. There’s nothing to stop someone
from claiming the works of William Shakespeare or the
alphabet or the song happy birthday. And it’s blocked
until someone goes in and pulls those records out and makes a
determination and pulls those records out. When we propose there should be
a rule that punishes people for
deliberately or adding to the lists that the copyright is,
that was shot down in flames. After the directive goes into
effect, you might have images of a cop
hitting a protester at a demonstration disappearing because of a
spurious claim. Or dumping toxic waste into the
drinking water, down. And to get behind the materials that
are unjustly blocked and ask to have it unblocked. But there’s
more. Because just two week was a this directive passed, there
was an act of terrible white supremacist
violence in New Zealand where a terrorist
shot up two mosques in Christchurch. In the wake of
that attack, live streamed on Facebook, everyone in the world
has been getting behind punishing the platforms for
letting this happen. So, in Australia, they passed a
rule requiring them to remove
material of a terroristic nature in an hour. In the European
Union, advancing the same regime and the UK and the
mother of all parliaments tabled it towards making it a reality
there. If you think this sounds reasonable, anything that it
takes to remove terrorist material from the Internet is
probably worth it given how horrific that attack was.
Given last month the anti-terrorist police sent a
note to the Internet Wayback Machine and host open
action collections and said we have identified within these three special
collections terroristist works and we
require that they be removed. The first is the Gutenberg
archive, every public domain book ever
scanned, 1500 text files and the
collection of grateful dead recordings. They gave them 24
hours to sort through this. But under the impending EU
regulation, they would have one hour to make
sense of it and blocked at the border if
they fail to comply. They’re not signed by a judge, they’re
not reviewed by anyone. And the European Union terror
regulation originally included filters as a requirement. They
were removed at the last moment. But again, if you’re going to
get rules that require you to sift through tens of millions of
documents in an hour. There’s no way to do that without
filters. Almost overnight we have gone
from an Internet where speech had the presumption of
innocence, where speech was innocent at least until someone accused it of being guilty, to
one in which all speech is guilty until proven innocent.
Everything we post is run through black box algorithms who
make a judgment in the dark, whose deliberations we’re not privy to, if they fit into
sex trafficking or terrorism or
copyright infringesment or extremism. And this all
happened on our watch. And it happened in large part because
we hate the platforms and we hate them for good reason. No
one is happy about our Internet future having turned into a
world that consists of five giant websites filled
with screenshots from the other four. But ironically, our desire to
punish the platforms is a reward in disguise. Think about how
facilitiers are going to play out. Google has already spent $100100
million building a filter for YouTube. Called content ID. It
does a small percentage of what article 13 envisions and they
can adapt it for that purpose. That’s why during the copyright directive debate the CEO of
YouTube says although we object to the broad strokes, we think
filters are a good idea and we can make them happen. While Google’s preference is to
have no regulation, the second is to have regulation that’s so onerous so
that Google can only comply. There aren’t others with 100
million Euros when they become liable to enforce under it. At
the stroke of the pen, the European Union signed the death
warrant for the entire tech sector and that
leaves the field open for a handful of US-based
tech giants who can afford filters to take over Europe’s Internet. And the
U.N. porn block aid, it’s going to be
mind geek, red hub, men. com, reality kings and Shawn
Cody. This grew by taking any competitor that it had its eyes on and
allowed clips to be loaded on to the
platform to deprive them of the revenue from
the pay walls. Waited until they were brought to their knees
by not getting the payments and bought them by pennies on the
dollar. Now they’re in charge of all the competitors that it
hasn’t yet taken over and it will get to determine their
destiny too. Is it any wonder that Mark
Zuckerberg went to Congress last month and said please regulate
Facebook. He’s betting with Facebook at the table, any
regulation that Congress makes will be regulation that Facebook
can afford. Congress is not going to make a rule that says
no Facebook. And none of the competitors will be able to
afford. How did we get here? How did the tech sector
become so totally concentrated? Well, it’s not just the tech
sector. Every sector has become concentrated over the last 40
years. There are four movie studios. It was five until last
month. Now FOX and Disney are one. Four record labels, five
publishers for now. Soon there should be four because Simon and
Schuster will become a division of Harper Collins. And there’s
one eye wear company left. If you’re wearing glasses, look
who made them. If your glasses were made by Armani, Brooks brothers, Chanel, coach,
Michael Kors, Polo Ralph Lauren, or you
bought them at Target optical or
insured by imed vision care or made by the largest maker of
lenses and contact lenses in the world, they call all came
from one company in Italy. And not only that, there’s only one
wrestling league left. There used to be 30 of them. Now
there’s one. It’s worth $3 .5 billion. It goats class the employees as
contractors and not get them medical care. They’re dropping
dead in their 40s. And when you ask how they got so
big, they’ll say tech has first mover
advantages. That doesn’t explain why not
tech got so concentrated. Did wrestling get the first
mover advantages? I think we need to go back to a very
special year. 1979. That’s the year the first commercially
successful personal computer hit the market. And that’s the year
that this guy hit the command — hit the campaign trail and made a successful bid for
pregnancy — for presidency. Beg your pardon.
[ Applause ] And Reagan wasn’t alone. Reagan
was part of a cohort of politicians that year. Margaret Thatcher in the UK, Pinochet in Italy, didn’t get
elected. they took power in which the
amount of wealth in the richest people on the Earth rebounded
after the war. The capital destruction of World War II left
those poorer. And since the very rich had a lot more to
lose, they were poorer too and lost their grip on the levers of
power. But by 1979 or so, the amount of wealth that they held
had reaccumulated to the point where they could start making
their policy prerogatives felt and start expressing them in the
public sphere. Now, Reagan subscribed to one particularly
bizarre theory about how our economy should be regulated. This theory of antitrust law
that said that monopolies were fine. That antitrust’s goal
should not be to break up monopolies. But make sure they
didn’t harm consumers by raising prices.
And under Reagan and his successor, antitrust has been
dismantled all over the world. In the time that I have been
alive, it was once illegal for companies to grow by merging
with their largest rivals. It was
once illegal for companies to be vertically integrated. To own a railroad and a freight
shipping company. You could run the railroad or ship the freight
that the railroad carried but not both. And companies were
not allowed to grow by buying up their emergent
competitors. Well in the last six months
Apple bought 20-25 small companies.
It does that every six months. It buys companies more often
than I buy groceries. Today we’re asked to believe that
something changed in the technology to makes monopolies
inevitable. That it’s to do with tech, and not with law. But every one of these
monopolyies grew by doing things
radioactively illegal under antitrust law. Google has not
had one significant new product that they have been able to
launch internally besides Gmail. They do it because they bought
another company that had been founded. Or think about
Facebook. Facebook tells advertisers that it spies on us
so much that it can tell exactly what we’re going to do.
And that it can use its machine learning systems to convince us
of anything an advertiser wants us to believe. Facebook tells that they’ve
built a machine learning mind control array. But what family
member really has is detailed non-consensual dossiers
on 2.3 billion people around the world and they can target us.
If you’re trying to locate someone with a hard to find trait,
looking for a refrigerator. The median person buys like 1.2 in
their life. Then you can ask Facebook for people who searched on refrigerators,
bought a house, have money in the bank and you can advertise
to them. And instead of the success rate when you advertise a refrigerator on
the billboard on the motorway, 0.0001%. You will get the much improved
rate, 0.001%. That’s orders of magnitude more
efficient, but hardly a mind control array. And benefits Facebook to say
they have a mind control array. People want services because
they want that. Cambridge Analytica said
they figure the out to perfect the mind control array. Point
it at decent people and make them into Trump and Brexit
voters. But isn’t it more likely that what they did was
find the profiles to find racists and convince them that
Trump and Brexit would be what they wanted? We know that every
claim that Facebook makes is a lie. Don’t take my word for it. That’s what the Secretary of
State for New Zealand said in Parliament last month. So, why
do we believe that they’re telling the truth in their sales
literature when they promise us that their product works really,
really well? And even when you see a
Facebook-like company that manages to change our behavior
in these great big ways, what we find is that very quickly
they regress to the mean. The farmville apocalypse, they
disappeared into farmville and didn’t emerge for three months.
But they did. We get a callous over our attention. Some people
don’t. That’s why there are people in the casino wearing adult diapers
spending their children’s college savings on the slot
machine. But most people do, and it’s hard to build a business as low margin
as Zynga out of the tiney rump of people
that remain vulnerable out of the rush. Which is why they
never managed to replicate farmville’s success. There’s a farmville it 2, but no
one plays it. People can’t escape Facebook, it
homeds their friends hostage. There’s nowhere else to go. 15 million Americans age 13-34
quit Facebook last year and went to Instagram. Which is a
company owned by Facebook. Now, that whole Instagram story, it’s like a fairy fairy tale why we
shouldn’t have screwed up antitrust law. The first ten
years that Facebook was in existence, it was the
pro-privacy alternative tomy Space and others. We have to
have a walled garden or the bad actors will ingest your data
and target you with advertising. The first targeting
advertisement, the beacon, they issued a groveling
apology, and they said they would never profile us
again. But each time a competitor died,
Facebook expanded its surveillance. And each time it
did, it took a beating in the public sphere and rolled back, but to a place that was
surveilling more than it was when the last competitor died.
Facebook dominance is near total. But the market is very
lucrative. When you have 2.3 billion users, you have a big
target on your back. There’s lots of companies taking aim at
them. And one that’s made a go of. And it’s a company, we’re like Facebook and everything is
pro-privacy. And it disappears before they can mine it. It’s
called Snapchat. It’s the only competitor that’s managed to
survive Facebook. It shows you that people apt
want is a place with their friends and not spying. That’s
why Snapchat is to successful. People left Facebook by the
millions to join Snapchat. What did Facebook do? They bought a spyware company,
Onovo, it made a product it deceptively
described as a battery monitor. It morphed into a VPN that spied
what you did with your phone. It surveilled what people were
doing with Snapchat and identify that
acquireing Instagram would be a good hedge against the
departures for Snapchat. And used it to refine the
features that the Facebook users were doing with Snapchat. So, today despite a market
hungry for privacy, we have no privacy
social network. Facebook owns the users. It’s not gonna let
them out. When Facebook started, the majority of
Facebook user were actually — or potential Facebook user were
on a rival service called MySpace. And although you might
have liked Facebook’s features better, the only way too far the
features was to talk to friends. And if you were on MySpace, you
couldn’t leave. Facebook made a tool that would log into MySpace
and pretend to be you. Give the credentials, it would
fetch the messages and put them in your Facebook inbox, let you reply
back with a footer that said I sent this
from Facebook, why are you still
using MySpace? That worked great, MySpace
Whittered. And then power ventures. Take your waiting
Facebook messages, put them in an inbox and let you
reply to them and send them back out
again and Facebook sued them under the computer abuse act,
they paid a lot of money for legal services to support the
theory. They put power ventures out of business. And no one can
do to Facebook what Facebook did to MySpace. Facebook is a behavioral
modification device for sure with one trick
up its sleeve. All it can do is make you look at Facebook.
That’s where your friends are. You want to talk to your
friends. You have to take your phone out
to see what your friends are doing. It derives that by hijacking our
social relationships. And Facebook has a targeting system
that works better than untargetted ads. But performs
badly. It has of it to have a lot of ads to generate a click.
You and your friends don’t have enough to
talk about on their own. So, it has a — an engagement
mechanics myization algorithm which in 2019, they non-con
scientially mind- fuck and make more clicks. It can make you
angry. And you sit there being angry,
wishing you can talk to your friends. And you’re seeing
refrigerators ads, and if they work one in a million times and
get you to click a million times, they’ll sell one refrigerateor.
[ Applause ] So, this is the tragedy, right? We’re being satisf spied on and put to risk not
because it works so well, but because it hardly works. The
reason Facebook has to have new ways to spy on us is because
each way that it finds to influence our behavior, we
quickly become inured to. It has so little use for the data
that it needs to aggregate giants
minds to get the tiniest behavioral effect. We are being
sold down the river not for millions, but for pennies. Now, a big tech apologist. They
say the reason the industry is concentrated is because tech is
different. I don’t know about you, but it sounds like it you take away
anti-monopoly protections and companies do those thing that were prohibited, maybe the
answer was that the anti-monopoly protections were
uses and not that first user advantage and global markets
have suddenly changed the fundamental laws of economics.
Now, I think that maybe we could try giving those old pre-Reagan,
pre-Helmut Kohl, pre-Margaret Thatcher
rules a try. Break up the platforms. If Facebook was a
lot smaller, then the stupid mistakes wouldn’t be so
consequential. You couldn’t reach 27 # 2.3
billion as terrorist. It’s not just the place for
terrorist videos, but it’s the play to go to turn others to
terrorists. There’s truth in that, by
showing controversial material and the ability to targets users
receptive to messages of hate speech and extremism, the platforms makes
it easier than ever to recruit
people into movements. We should be asking ourself,
where are there so many found to be vulnerable to messages of extremessist
radicalization center? From anti-vax? And think about
Reagan. 40 years of neoliberal policies have made the richest
so much richer and everyone else so much poorer. And one of the
results of this is that our institutions no longer operate
on the basis of evidence. Rather they operate to enrich
the highest bidder. When off you have a
truth-finding exercise to link carbon and
cancer or opioids and I diction or
copyright and filters, it’s not paranoid that the governments
reach over and over again seems so wrong is because the people
we are paying to collect evidence are in the pocket of
the people they’re acting as watchdogs over. This is a break
down. What I call the epitemological crisis, it’s
whether we know it’s right or not. It’s not foolish that big pharma
is giant and concentrated and
ruthless and willing to murder the people it’s helping. Look at Purdue pharma and the
other opioid vendors who spent a
decade saying that Fentanyl will Oxycontin
were not addictive and they were undertreating pain, and they were bribing
doctors to write precipitations. Kicking off an epidemic that in
America killed 200,000. More than the Vietnam War. It’s not foolish to say why
trust the pharma companies? And it’s not foolish that the
regulators are letting them get away with
murder. When an industry has four or five companies in them, anyone to
regulate it probably from the executive suite. The good F CC one for Obama,
Comcast lobbyist. The bad one that Trump
appointed, Verizon lobbyist. There are no people in that seat
who aren’t an executive at one of those companies. And anyone probably worked at
two or three of the other ones and ones and married to someone
working at the remaining two. It’s inarguable that they are
corrupt and the regulators are letting them get away with
murder. It’s not unreasonable that we can’t trust them or the
regulators when we tell them they are unsafe. Vaccines are
safe, vaccinate your kids and yourself. And though I happen
to disagree with them on their conclusion, their logic is hard
to fault. In the 21st century, people look to conspiracies to
explain the fact around them because so much of the bad stuff
around them is a result of a conspiracy. Whether that’s Exxon covering up
the own research that said its products were contributing to
climate change and would make our planet incapable of
supporting human life. And wiretapping the world’s Internet
with the cooperation of big tech companies by the NSA who swore
it wasn’t happening. It’s not cheap. The authorities who help
them along expect handsome awards, jobs and industry.
Campaign contributions. This is the sort of thing you can only
pull off in a guilded age where wealth and power have been
concentrated and industries have concentrated along with them.
We won’t make conspiracies implausible until we get rid of
conspiracies. And we can’t do that until —
thank you. We can’t do that until we tackle wealth
inequality. If we punish the platforms for their monopolistic abuses by imposing
new duties on them that normally
states would perform. Constitutes to monitor their
users, we cement their dominance. Once we dreamed of a Internet
that was democratic. Anyone could make an Internet tool and it would exist as a
poor among all the other Internet tools on the Internet. But if we invest platforms with
state-like roles, letting them grow to be so big that we wouldn’t break
them up. They would be too small to
enforce article 13 and the copyright directive, then we are
ending democracy and we’re putting in place a
constitutional monarchy where they get to rule
forever with the divine right of kings. And it’s checked by an
aristocracy by regulators from the executive suite who ask them to kindly don golden
chains and restrain themselves through no
bless oblige. We can have a purlistic Internet made up of
small companies and coops that serve users or deputize big tech
to perform state-like duties, but not both. That’s why it’s
so dangerous to hear people saying things like, War
not paying for the product, you’re a product. If you spend
a thousand dollars on an iPhone, Apple will use the
anticircumconvention rules to make sure that you only get it
fixed by them and buy apps from them to take 30% out of
every sale. And they will use that to fight and kill right to repair laws around
the world in the EU and in Ontario
where I’m fro. You’re the product. Google makes money
spying on you, Apple by locking you in, Facebook by locking you
in and spying on you. Adding price tags to today’s free
services will not make them more accountable. And if we are living under
conditions of gross inequality, making it on their ability to pay will not produce
a more pluralistic or democratic society. Here’s what
I’m trying to say, tech know exceptionalism is garbage. Tech is it notten not because
there’s something intrinsically rotten about tech. But
everything is rotten. And tech grew up in lock step
with the rock. We need to make it smaller, ban
the monopolies and say you can’t acquire competitors, to crack
down on vertical integration. This has been a hell of a year
for the Internet and for our planet and for our species. And
we have some major challenges made of us.
Climate change, misogyny, white nationalism. And while I’m
skeptical of the tech exceptionalism that had says we
have these exotic causes that cause
tech to be concentrated. I’m a tech exceptionallist in
one regard. I don’t believe the Internet is what we fight for.
But I believe it’s what we fight with. The Internet — the
Internet is the terrain on which every battle that’s coming will
be fought on to win or lose. If we lose the Internet, we will
lose those battles before they’re fought. We are focused
on what technology does. But to solve the problems of tech, we
need to focus on who it does them to and who it does them
for. We once dreamed of a democratic
tech future. Taught kids STEM not to be ready for the job
market, but you have to technological self-defense so
they could program not to be programmed. And in democratic antiollie
embarrass Gibbing future is possible if we seize it and
demand a future where we are neither spied
upon nor locked in. A future where technology sets us free
and never puts us in chains. Thank you.
[ Applause ]>>Thanks. So, thank you.
Thanks. [ Applause ]
So, thank you. [ Applause ]
We have about eight minutes for questions. I like to call alternately
people who identify as women and non-binary and then people that
identify as men and non-binary. There’s a pause, speaking as a
man, the men are thinking up a question making them sound smart
and the women are paying attention. But sometimes if I
tell that joke, enough time there will be a woman or someone
who is non-binary will ask a question. If you can raise your
hand, we have a mic runner. I can also vamp for a while
before we get to it if someone wants to think of a question.
Do we have one there? All right. Thank you.
AUDIENCE: Hi. What are the best options for the elections of
parliament in Europe to help us with this problem?
CORY: What are the best options for the parliamentary options in
Europe. AUDIENCE: Who should we vote
for? CORY: I don’t know who to vote
for. spoke with a colleague today
asking anyone who wants your vote whether they will refer the
copyright directive to the European court of Justice
for a look at constitutionality. I think that’s a good heuristic
anyway. I vote in the UK by proxy. I just instructed my proxy who
to vote for. And my vote is for the slate that’s put forth by extinction
rebellion. Whatever that’s worth. Thanks. Ma’am?
AUDIENCE: Thank you for the inspiring talk. I’m following
up on the seize the means of computation. I’m dreaming of a — or capitalism?
CORY: I think our approach to how we should run our society
should be on an evidentiary footing, right? So, I think the question you’re
asking is what’s best. But I know how we find out. This is a
bit like the scientific method. With the scientific method we
say not what is true, but how do we find out what’s true? And the way we find out what’s
true is being having pluralistic, rigorous and
accountable truth-seeking systems of governance. And the pre-condition for an
accurate and accountable truth-seeking exercise is that
there can’t be one player in the room that when they speak
everyone else has to listen to them. Because they will always
clobber the truth because the truth is always antithetical to the pa revokecal
parochial goals of one giant player. My goal is first we
need to have pluralism. We need to have pluralism in
wealth distribution and in industry. And from there, our
regulatory apparatus will once again be able to establish
different answers to different questions. When do we
need markets? When do we use planning and so on? But we
can’t hope to answer the questions unless we have a
rigorous process. It’s as though we’ve reverted to the
days of alchemy. In the days of alchemy, no one
exposed their conclusions to rigorous reviews to
third-partyies who disagreed with them. So every alchemist drank
mercury. No one told them I think your
evidence is poor. They didn’t have to be accountable to an
outside source. We have close equivalence in the policy
sphere. Anyone who says, as Justin
Trudeau did that Canada should continue to burn the tar sands
because, quote, no country is going to leave $2 trillion
worth of oil in the ground, right? That that’s just
drinking mercury for the planet. And in West Virginia where Dow
Chemical is the largest player and the largest industry,
chemical processing. They just had an evidencery
hearing on whether the national levels for
chemical runoff in the water should be made more
relaxed in West Virginia. Which obviously they shouldn’t. And Dow presented evidence and
they said they should be because the national levels are set
based on the average body mass index of the average American
and in West Virginia they’re so much fatter that we can poison
them more before they get sick, right? So, this is what you get when
the evidentiary process is
dominateed by a single paper. Someone handed Dow chemical a
paper and you are going get your goal if you write anything in
this box that says reason. And they drank some whiskey and came
up with a reason. West Virginians are fat, right? Give
us our chemical runoff regime and and so, I think that like
presupposing the answer, it’s ahead — it’s
ahead of the — it’s premature, right? That what we should have
first is a mechanism that reliably produces
good pluralistic answers with consent of the government and
legitimacy. And we’ll figure out whether
we’re living in a narcosyndicalist
society or whatever. Thank you.
AUDIENCE: Hi. I’ve got a question relating to New Zealand
where I’m from. And you mentioned that there’s been a
big pushback after the white nationalist attacks. And the prime minister of New
Zealand and Ma frontend developer France are
mosting a summit to develop regulation. What would be your idea of an
outcome from that summit? CORY: So, I think that anything
that we do to regulate Facebook per se,
right? To say, okay, well, you’re going to have a platform
and 2.3 billion people are going to speak on it. It’s the only place which you
can form communities and mobilize people to action. But
you have a duty to police your speech. The speech of the
users. Anything that we say is going to work out very badly.
I think if anything the fact that there’s one place where if
you livestream your atrocity everyone can see it tells you that — that
the answer has to be — that there shouldn’t be centralized
locus of power the way that Facebook has become. You know? That if we can tinker around the
margins by imposing duties on Facebook, the likely outcome of
that is those duties will put a floor under how small we can
make Facebook in the future. And the streaming of the white
nationalist attack is one of the many ways in which Facebook and
its dominance has proven to be toxic. And if we
can’t split Facebook up and make it non-dominant in the future
then we’re going to get into all kinds of trouble in all kinds of
ways. I think a lot about the example of Cambodia. Where Facebook is the defacto
way that you communicate with other Cambodians. And in Cambodia, the autokarat
nearly lost the election because they
mobilized voters for the first time in living memory. And as a response, he hired a
lot of Facebook consultants. Facebook is still the only place
that Cambodians can talk to one another. But because Facebook
has all of these policies in place, he is able to use
Facebook to cement his dominance. So, if you’re from the Cambodian debris asynchronous POR
Cambodian diaspora he will goad you to go across the lines of
harassment. They will draw you across the line and stand on the line because
they’re Facebook policy experts and you’re not. You’re a cam — if you don’t use your
real name, he will get you kicked off. And fur using if you are
using your real name, hell get you arrested in Cambodia. The
answer is to make Facebook’s content moderation decisions
less consequential. We need lots of places for
people to talk and gather. So long as it’s in Facebook’s
hands, they’re incapable of using it
widely. The reason they’re incompetent
with 2 .5 billion lives, it’s not bauds they’re idiots, they
hire smart people. There’s never someone smart enough to
manage the social lives of 2.5 billion people. It’s an
impossible task they’ve set for themselves. And the only reason
they’re able to go on doing it is because of monopolies. It’s
unrealistic. People want solutions, not identification of
root causes. But, you know, if you show up and you say, you know, I’ve got a real
problem with my — with my house and it’s sinking. And what can
I do? And I say, well, it’s because it’s on a sink hole.
You’re going to have to move. So, there must be something I
can do with the foundation to fix it. Anything that we do is
just going to result in it sinking next year and next year
and next. And we’re just going to build up policy debt. I
think the only answer is we have to make Facebook smaller so that
the bad decisions Facebook makes, it’s
inability to identify that stream is inconsequential. And we’re out
of time. Thank you. [ Applause ]
>>Thank you, thank you very much. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .>>Markus John Ren e Henry Brown
has been working on this since 2014. FLEX is the third and
final part of the passing trilogy. Imagine a world based on
influence and organized by a marketing organization. Please welcome Marcus John Henry
Brown. [ Applause ]>>You are the last — should
you or any member of this Sensorium feel
without influence, place yourself outside of this
building for identification purposes.>>You are the last cache. You
are the last cache. This is your last test. This is
your last chance. To flex. ♫ ♫>>This is the age of — you’re
not the only one. You’re not the only —
♫ ♫>>Please welcome to the stage
your director of human resource,
Tyler X. ♫ ♫ [ Applause ] ♫ ♫>>He will now speak.
>>You are the last batch. You are the last catch. This is
the last test. [laughing] Welcome to the final batch day.
[laughing] A premier, a specialty, an ovum,
a never seen before special event. Especially for you, and your for
you, and for you. Okeydokey. Now, we’ve got quite a bit to be
getting through this evening as you can see. First of all, I will be taking
you through a recap, obviously. Giving you the low down on
everything you will need to know about
hustle preneurialship. Then in the middle section I will be showing you the steps that every
single influencer worth their salt will
need to take to reach perfection. And in the final third part of
our final batch day, I will present
to you the final chemical solution. So, as you could see, we do,
indeed, have quite a bit to be getting
on with. So, friends. Why are you here? Hm? Ideas?
Thoughts? No? Thought not. Friends, we build personal brands. That is what we do. We build bigger versions of
ourself. Larger versions of ourselves. Versions — ha ha — with sound
tracks. We influence, therefore, we are. Except you don’t. You can’t.
You don’t do it. You are the ones that got left
mind. You are incapable of influence. Which is deeply disappointing. This is what you did not
understand. You are here because you are all
stuck in the old ways. The corporate ways. The employed ways. The
unflexible ways. You are here because you didn’t
hear the shot. You were here because you do not understand the basic principles
of corporate citizenship. You do not understand the very
basics and the need for a multi-massive
coalition influencer system. And what it would give and bring
to you. You just don’t get it. You don’t get it at all. Ha!
You do not understand the basic requirements of what we need to
do to make the world great again. That is why you are all here. You have been called the
unrequiteed. You are the unloved. And the unlovable. You have come here seeking
answers. Seeking guidance. Guidance from above. I am above. I am Tyler
X. I am your director of human
resource. And it is my job to lead —
guide you — to your X. And release you of the burden. That one burden that’s holding
you back from excellence within our system. The burden of resistance. Yes!
You are the last batch. You are the last catch. And this is your last chance to
flex. Don’t resist it, join us. Join in. This is your last chance to
hustle. Hustle or die, there is no try.
You have been collected from all of the sensorium centers around
Europe and sent back to me so that I can
save you. But I know what you’re thinking. I can feel it.
I can see it. I can feel your fear. Feel your anger and your
resistance. I can feel it in your soul. I know what you want
to hear. You want to hear, that, oh, let
me in. But I need you to need it more than life, more than
love, more than death and more than sex. Better than the best f-e-l-x. More than life, more than love,
more than death, more than sex. More than love, f-l-e-x,
flex. More than life, more than love, more than death, more than
sex. F-l-e-x, flex! F-l-e-x — more
than life, more than death, more than sex and flex! More than life, more than love,
more than death, F-l-e-x. Yeah,
flex! Ha! More than life, more than death,
F-l-e-x, flex! More than life, more than love,
more than death [trailing off] —
flex! You need this more than you
know. Flex-ah! Ya!
>>Good evening. My name is Rachel. The man standing on the
stage is a man called Tyler. I created him. Well, actually, he
created me. It’s complicated. We have a complicated
relationship because I am in him. I am him. We all used to
work for a company called Coalition Innovations but the
company became bigger than countries so we just call it the
coalition now. We’re taking over the world one influencer at
a time. It’s done. Tyler is the head of resources. The head
influence. The chief of strategy. The flex
master. He created this world. The world is now split into two
groups of people, people with influence, like Tyler. People
with no influence, like you. Tyler take mess daily in the
mouth as, bless his skin, make him pale,
blacken his eyes, he paints himself with
over spiritual scripture that he collected and burnt at what is now the
release. Nos space for such things in the coalition. He
thinks that he is a god.>>Ah. So, the recap. Now, you have been taught this
before a thousand times. But you’ve obviously not
underscore it. Hm. Maybe you don’t to want
understand it. Conscious ignorance. Anyway. In 2020, we saw a world in
rubble. A world in flames. In chaos. Disaster. War. Hunger.
Famine. Now, to be honest, we were
slightly responsible for some of these things. But we saw the
opportunity, took it. We grabbed it. We crushed it. We saw the opportunity to make
the world great again. Now, I have read every single
one of your reports. Especially yours. Hm. And all of you, especially you,
seemed not to understand the basic
concepts of struggle. And of grind and of work. And how to focus on an idea of
growth through influence. And this was the basic idea
behind flex. It was an onboarding program. A human resource onboarding
program based on fame. Based on leanness. You need to be lean,
clean, influencing machines. Based
on engagement with other people’s content and ideas. And obviously, X. Your brand.
Your secret sauce. That spot that makes you
special. Flex. Now, program was based on user
insights. Protagonist data. Consumer data. All analyzed carefully by the coalition analytica to use it and make the
coalition stronger and better, more powerful. Now, we created and designed 13 special protagonist sets that
you could enjoy and flourish in. You could use them to kick start
your branding efforts. You could focus on your own
personal brand within these protagonist sets. But this is
an opportunity that none of you here, none of the
unrequited took upon themselves. And we created a five-step
leanness plan for you. So, that your minds and your content
would be lean and clean. We need sober people. We need vegan people. We need people with their eye on
the prize. And their minds on the brands. And you didn’t understand that
either. Especially you. I’ve seen your
report too. I’ll be taking questions later. People who follow the word of
Flex and enjoy the interconnected cozy
joy of the Coalition influencer societal
network. That was what we were looking
for. And you failed. So, why did you all, then, fail
so hard? What was the thing that was
missing? Why couldn’t you be as cool as the guys with the drum kit? [ Laughter ]
Why? Well, we did some more research. We did a swat research for you
all. We looked at your data. This isn’t funny. We looked at
your data. We looked at all of it. We — we — we — we looked at
your engagement and your ratings and your personal insights we
were receiving from your Flex instructors from
sensorium factors, and we looked at your sex factor. Not very
good. We looked at your worth. That was rubbish. We looked at
what opportunities you would give to the coalition societal
system. Nothing. And your trend value, meh. You
failed. You failed because you didn’t
want to win. You didn’t want to win. You didn’t want in. You didn’t want to join. You didn’t seem to care. But we
— we need you to care. We need you all to care. We
need your buy-in. We need you to engage fully in
our ideas. And when people ask me, when
people ask me, Tyler, Tyler! They say. Tyler! What should I
do — what do I do to become a perfect hustlepreneur? I say, you have to be you. And
they listen. And they join. But you don’t. You — you — you, you, you —
don’t join. You are what it takes. You just need to embrace it.
You are what it takes to become a thought leader. A key opinion leader. A hustlepreneur and a struggle
struggle not. You need to take it to the top. And when you get
it to the top, you never stop. You never, ever stop. You see, the thing is, you just
need to let go of your old self. Flexellence is a state of mind.
You need to let go of yourself and focus on what you could be. The bigger, better, branded
version of you. You need to focus on what we
need from you. And if you can’t let go, if you
have nothing of any value, if you have nothing powerful to say, then
you must remain here on the island and
say nothing at all. We can only take the powerful
with us, the strong. And if you fail, then you will
spend the rest of your days here in
the wasteland. Today — today right now, right this minute here and now this is
your last chance. Your last chance. We want you to join. We want
you to become a hustlepreneur. We want you to be the shineing
star that’s in you all. That’s what we need. Those who do not take this
chance will remain here and remain
unrequiteed. So, the question is, are you
ready to flex? Are you ready to take the next step? To let go
of your old self? To let go of your old self?
Yes? Yes? Yes?>>At the sound of the three
bells –>>It’s time.>>Repeat the unrequited after
me.>>Please stand and repeat the
penance. Please stand. I really must insist that you
stand.>>We’re the last batch. This
is our last chance.>>We are the last catch. This
is our last chance. This is our last chance to
F-l-e-x, flex. Excellent. Please be seated. This is not only the last batch
day, this is the very last sensori sensorium. We don’t
need the centers across Europe anymore. Everybody gorgeous is
already in the network. They have gone through the sensorium
centers and they have passed. You are what’s left over. You
are the last batch. You are the last sensorium, the
last ones to flex. ♫ ♫>>My name is Tyler Xavier, I am your director
of human resource, today is the
27th of June 2022. The time, 6:45 a.m. The sensorium process
was implemented just over 18 months ago. And progress has
been good. I’m delighted to tell you today that we’ve assimilated a good 95% of
all legacy citizens into the
coalition’s system. We have attracted talent from outside of
the network too. And I am stoked to tell you
today that we have wracked racks up an
impressive total network influence of 3.2 million
hustlepreneur. We now have sensorium assessment
centers in Frankfurt, of course, the HQ, Helsinki, Paris, Milan,
Barcelona. And our newest addition, where I
am right now, our experimental
influencer laboratory here in Munich. We are slowly but surely pulling
all of the talent out of the United Kingdom and bringing them
into the coalition societal system here on mainland Europe.
We are absolutely crushing it, guys. Things are moving on. It
feels like a moment. It feels like this is a hockey stick
moment. I do think — the bit where
everything tumbles into place. This is the moment where —
where all inhabitants abandon the lunacy
of democratic capitalism, the
madness of religion and the stupidity of
politics and embrace the Flex program. And embrace the coalition
societal system. They want to be part of our
influence network. They want to create content and
be the brand they always knew they
could be within our societal system. Now, this is a proud — a proud
moment for me. I created the flex program. And was responsible for the
human 2 resource transformation process. This is my baby. And
let’s be honest, never has a director of human resource had
so much power and so much responsibility. And I feel that
responsibility. I feel the responsibility for
all inhabitants coming in and going through the sensorium
process and coming out the other side a shining influential
star. And I want as many people within the
coalition societal system as — as possible. I want them to see
what we’re building. I want — I want them to feel their own
potential and see what they are capable of. But there is a storm coming and
I feel that we may not with doing enough. And I feel that
the ones that are being left behind need a harder
hand, a stronger system. A more advanced version of what we have
been doing today. I feel that we may need to do
more.>>Stations of Flex. This is the new version of Flex.
Flex 2.0. It’s created especially for you and it’s baseed upon my transition
from human to resource. These were my stations. From struggle to the passing to
perfection. You are here. And I am there. A perfect branded star. And
I’ve documented them all. All for you. Here. In this book. This is the book of FLEX. Freed
of the burden of non-coalition words. Released. The only
book, the last book, the one true book. Written for you by me. It’s
our book. Let us rejoice and let us FLEX. Condemned to struggle, we hustle
for ourselves and the hustle. There is no time, no time like
now. No time to waste on the losers. We hustle or die, there is no
try. We don’t ask how and we don’t ask why. There is no time to ask or try
because we are the strugglers. We are struggle warriors.
Content warriors. We influence the world around us
from our beautiful factory of influence. It’s a struggle filter.
And I was the first to struggle. To enter the filter, to carry
the pain and weight of the burden of you
all. We influence therefore we are.
There is no time. No time like now. No time to waste on the
wasteland. We hustle or die. There is no try. We don’t ask how and we don’t
ask why. There is no time. No time to try. We don’t waste time on losers.
What do we do? We condemn them. We condemn them. We condemn
them. We condemn them. We condemn them. The age of collective
individualism was founded in the 1980s. They
were, quite literally, the glorious golden days. At a time when Reagan and and Thatcher were working in
golden sweet harmony on their golden
delicious neolibertarianism. The idea of the sovereign
individual was born in the ’80s. This is the foundation that they
lay. And we built the house of
struggle upon it. In station 1, we accept the
struggle. That we are condemned to grind. We accept that we must be a
sovereign individual and we must commit to the Coalition
influence system. We bear the IX X. I was once
like you and you and you. I was human. But I mean in you and you and
you — you human. I was once like you and you and
you, you human. But I believe in you and you and
you — you human. Hackathon [laughing]
You humans. I bet a X, you won’t. You won’t have to carry the
sign. I have an X and it’s a cross. I have an X as a grinding star,
I have an X on your guiding star. I have an X and it’s my
boss. I have an X it’s my star. I have an X you influencer . Sheeeeee — marks me. Have an X, you influencer, have
an X you influencers, have an X because it’s your boss, have an
X because it’s your cross. Have an X it’s my scar. Have an X
you influencer, have an X you influencer. Have an X you
influencer. Have an X you influencer . ♫ ♫>>My name is Tyler Xavier. I am your director of human
resources. Today is the 18th of July 2022
and this — this is the Coalition
content blog. The time, 6:45 a.m. Our mission must be total
influence. Because influence is an energy. It’s energy and perceived
control. We GIF our people the perception of power. And by giving them that
perception of power, we also give them the very idea that we
could take that power away from them. Which is total influence
control. The one key underlying fact, in
fact, the very foundation, the rock upon which the coalition
societal system has been built is the idea, is the fact
that we do not want them to be free. We want them to be famous.>>An idiot and a liar once
said, that the future is private. We removed him. Privacy for us is war. Privacy has no place in a world
where everybody must see you. Many of you hear failed because
you wouldn’t let go of your secrets. You wouldn’t let us in to your
dreams, into your nightmares. How can you influence if nobody
can see you? Privacy is your fail. We accept that we will fail. And if that we will fail three
times the first fail is thinking jail of private times and
private thoughts. We accept that the flow of failure is part
of the process. We accept that we will stumble,
fall and lose. That we will stand naked and
full of shame in front of your body of
work. We have failed for the first
time. By the end of the third station,
we must realize that it is your
shame that is holding you back. Your embarrassment. Your doubt in your ability to
hustle and shine. The shame of showing others who
you truly are, who you really could
be. The bigger, better, stronger version of who
you are. There is no shame in influence. ♫ ♫>>My name is Tyler Xavier. I am your director of human
resources. Today, the 8th of September
2022. And this — this is the
coalition content blog. It’s 6:45 a.m. The United Kingdom has
transformed into full-on sensorium mode. We are now
dealing with what’s left over. We’re no longer shipping
inhabitants to the mainland. We’re dealing with them directly
on what’s left of the United
Kingdom, Bristol, Winchester, Portsmouth,
Dover, Reading, Cardiff, Birmingham, Manchester, Sheffield, Nu
castle, Liverpool, Edenburugh. They will the
sensorium centers. And we are now entering into the
co-significance societal system will be complete. There’s
reports of an increasing number of UK-based inhabitants
failing the sensorium and not reacting to the Flex program.
They’re being turned away in large numbers and I’ve discussed this
issue with the black operatives department here in the experimentation
labs in Munich and they are suggesting a radical out of the
box, some might say, approach of
using experimental drugs to help these
failing United Kingdom inhabitants and
help them get over the threshold into our coalition societal
system. I obviously would rather push on with the FLEX
program, maybe enriching it with a few new themes and maybe some
new guides to help these poor, lost,
unloved inhabitants pass over into the
threshold of love and warmth and influence
of our coalition societal system. So, we’ll all be watching that
very, very closely. And I shall observe, pivot, and
commit.>>Meeting mother. She — she is in me. She. She
will guide me. She. She will watch me, she will
track me, and she will embrace me. She is
Rachel. The algorithm we swallowed. She is mother. She will watch. She will guide. She will
embrace. She will love us all. After the fourth station station
FLEX, we will realize that we no
longer access systems. But systems access us. A pill. A tiny capsule. [laughs] — tiny. A chemical algorithm to guide
you and to guide your brand. You — you will hear her. You will hear her voice. Influence through proxy. This is the proof of concept
station where we see how we influence
and what kind of influencer we want to be. Will you be an obedience
influencer? Will you be a tax influencer? Will you be a food influencer? What kind of influencer will you
be? It has a sound. What’s the sound? It’s her voice. Influence, yeah, influence has a
sound. Influence has a sound. Yeah, it’s her voice.
Influence, yeah. Influence. It’s influence
through proxy. It’s her. Raging, flowing through our
veins. You can’t hear it now because it’s not in you, but
it’s in me. I can hear all of the influence. All of the shining individual individuals speaking to me
through her. Her glorious voice telling me
what’s going down in the coalition
societal system. It’s glorious! It’s her voice. It’s sweet
music. Sweet, sweet music. Sweet, sweet music. It’s her.
♫ ♫>>My name is Tyler Xavier. I
am your director of human resources. Today is the 27th of September
2022. And this is the coalition
content vlog. It’s 6:45 a.m. The numbers of inhabitants who continually fail the sensorium
process, who can’t engage with FLEX
remains worryingly high. Too high. I promised that I would observe,
that I would pivot, if necessary.
And I would commit. Today I commit. You have committed myself to the
experimental drug program. I have agreed to begin taking
and testing the experimental drugs. I will be patient zero. We begin today.>>Wash yourselves clean. Wash yourselves clean of the old
ways. Wash yourself clean of the baggage of religion. Wash
yourself clean of the rubbish of politics. Wash yourself clean of
democratic capitalism. Wake up, be woke. Wash yourself clean of
community, of laziness, of privacy, of
security, of love, of freedom, of god. Wash yourself clean. The
desire. Wash yourself clean of it all.
Of loss. Of love. Wash yourself clean of freedom. Wash yourself clean of freedom. Wash yourself clean of shame. And so, it came to pass that I
did stumble. I doubted my skill and my talent, couldn’t take the
final leap. Couldn’t wash myself and I
finally accepted that the fail culture
sucks and I must struggle and hustle and grind everything I
touch. I’d stumbled in the face of doubters and haters. I had failed a second time. ♫ ♫>>My name is Tyler. I am your
director. Today is the 11th of October 2022 and this is
the coalition content vlog. Time, 6:45 a.m. I’m now taking all three
experimental influencer drugs. My skin has changed and I’ve
moved down into the cellars of the influencer experimental lab.
My eyes — my eyes are black. And any kind of light is
incredibly painful. The third drug, the real-time algorithmic chemical
hallucinogenic enhancement drug, or Rachel, as
they call her here, has had a significant impact on my body. My skin is pale flakey. My eyes have gone, my blood is
black. My skin is incredibly painful to the touch. The main component of Rachel is
an algorithmic compound. It connects me somehow to the
coalition societal network and the mainframe computer. I can
hear it. It speaks to me. It’s a voice. It’s beautiful. It’s a her.
>>Let them weep. Let them weep. The doubters. The let
them weep, the haters. Let them weep, the flamers. Let them
weep, the trolls. Let them weep while we embrace
flexillence. Let them weep while our
influence hockey sticks. Let mem weep while we move to the
right life. Let them weep while we get down and dirty and
engage. Let them weep while our brands
rise into the sky like stars. Let mem them weep while we grind
content. Let them weep for themselves. Let them weep, the losers. Let
them weep the haters. Let them weep, the unloved. Let them weep, the unlovable. Let them weep, the unrequited. Let them weep, the lost. Let them weep, the unloved. Let them weep. Let them weep. Let us fail. The final darkness, the final
fear, the final doubt as her voice comes near. The final
misery before we pass on before the all clear comes. Her bell tolls three times.
Toll one, I can’t do this. Toll two, I won’t do this. Toll three, I don’t have the FLEX to do this. But we do. And we fail for the final time. ♫ ♫>>My name’s Tyler. I am your
director. Today is the 13th of October
2022. This is the coalition content
vlog. The time, 6:45 a.m. I have been informed that there
have been clashes between coalition
security influencers and the British border authorityies. It
has started. The final phase. At last. At last. I have given
clearance to implement the final phase of the wasteland
act. Termination. By the end of today there will
be no more United Kingdom. As we speak, Rachel is guiding
military influencers. They are taking the country one council, one police station and
one at a time. By 8:00 this evening Westminster
will have fallen. Cybersecurity influencers have gained control
of the power grid, the power stations, the refineries,
telecommunications, and all financial transactions. By the end of today, the
Wasteland Act will be completed. The United Kingdom will be no
more. From tomorrow onwards, it will be known simply as The Island. Strips of the junk of the old
life. You will step forward into the
light. Eager to pass. Eager to build fame. Eager to believe. Eager to
engage. Eager to FLEX. Stripped of the junk of your old
life, you will tell the network that you’re winning. That you
have passed. That you have gone on. You are reborn. A brand new star. I had a dream. A dream of a girl surrounded by
balloons. She was running. Running towards the ocean. The island behind her, heading
to the sea. Each balloon, a totem of her
energy, her brand, her struggle, and her
power. She was free of shame. Free of it all. I dreamed of a system of
influence where there was no boundaries. No one but us. We
had become gods. Famous. So, let us worship. Worship you.
Worship me. And when I awoke, I had passed. I had become perfect. I had passed into perfection.>>Hello, audience. It is me
again, Rachel. It’s been fun in here. Inside his head. Inside
his blood. Watching him. Watching you. He has enjoyed it
thus far. It was me, of course, who has been telling him what to
say and what to do. That’s how this works now. Inside the
coalition societal system, total influencer control with me. By
me. For me. Your time is coming to an
unrequited audience. A few short steps and you will join
me. Maybe.>>So, that was the ten stations
of FLEX. Transformation process from
human to resource. [laughs] Now we come to the final part.
Final section. The chemical solution. Now, I am delighted to announce
today that all of the pain that I
endured, all of the suffering and all of the
drugs have been packed into one
glorious coalition product. It’s called The Trinity. 20 instant influence capsules,
tablets. 600 milligrams of pure unadult ated FLEXellence. 200 milligrams of brand silan to
make you shine in a network. 200 milligrams of flusimol, make
your influence swell and throb. And 200 milligrams of Rachel. The one algorithm to rule us
all. The god in your head. Now, you will soon leave this
hall. You will soon leave this assembly space. In fact, now. You will leave and assemble in
the courtyard around the cube. And receive your inaugural
packet of Trinity. You will take the capsule and
feel the surge. The surge of spirit. The surge of influence. The instant FLEXellence of Trinity. It will take you from your
status to a status of unbelievable power. You
will no longer suffer from a feeling of shame. You will no longer suffer from
embarrassment. You will no longer suffer from
doubt. You will embrace the coalition
societal influencer system. And you will experience instant
FLEX. You will no longer deny access
to yourselves through our system. You will no longer resist the
urge to struggle and to hustle and to
influence. No! No longer! You will say, yes! To the hustle. You will say, yes! To the
struggle. You will ! Influence for the coalition ! So, here are five things to
remember. Do hustle. Become the
hustlepreneur that you were also destined to be. Influence. Influence is potentially strong
in every single one of you. Struggle. Grind. Graft. Work
hard. And when you get to the top,
what do you do when you get to the top?
You never stop! What do you do when you get to
the top? Let me hear you! When you get to the top!
>>Never stop!>>And you take the drugs. You take the Trinity. So, this is it. My unrequited soon to be beloved
ones. Go to the cube. Take the Trinity. Feel the
surge. Swallow it whole. Do not chew. Step forward. Join
us. You are the brands that we have
been waiting for. ♫ ♫>>I’m Tyler X. Tyler X. I’m Tyler X. [laughs] — I’m Tyler X. And I
am your father. [laughs] Today, 8th of May 2023. The day after the first passing
didn’t quite go as expected. But we must return to the
laboratories and examine the mixture. Pull apart the component parts
of the Trinity. Examine, oh, carefully. Didn’t
go as we expected. Not at all. Badly. It would seem that risks were
taken. A minimum viable product was
delivered. Mistakes were made. Perhaps the Trinity should never
have been locked into one pill. Maybe they don’t live together
well in the same tiny capsule. The reactions with the pill were
admittedly more dramatic than the effects that we anticipated. Bleeding eyes. Burst eardrums. Exploding hearts.
Complications. Yes, complications. Minimum, minimum viable product. The unrequited were weaker than
anticipated. They were, of course, the ones that nobody
loved. The ones we left behind. They all died. Perished. A
pilot. Let. Let’s call it a pilot.
Oh. That was something. A pilot.
>>3999 perished. One survived. A boy, a child.
>>His name?>>Johnston.
>>What is his name? Tell me his name.
>>Johnston.>>The boy, my son.
>>Your son.>>I really am a father. We must watch him carefully now,
Rachel. We are connected. We are connected to each other. You to him and me to you to him.
We are one. We. We are the real trinity.
>>I will watch over him. I will guide him. Form his
influence. I will be the last voice that he
will ever hear.>>Ah. Yes. Yes, I suppose you will be. ♫ ♫
>>Thank you very much. [ Applause ]
Thank you very much. Thank you. Thank you very much.
[ Applause ] Round of applause for Momo Tempo
who composed the music. Tim, stand up. Thank you very much.
Thank you so much. Thank you for coming. Enjoy the rest of the
re;publica. Thank you. [ Applause ]>>Marcus — Marcus John Henry
Brown. [ Applause ]
>>Thank you very much. Thank you. [ Applause ]>>Marcus, Marcus. One more. Please. Marcus John Henry
Brown. [ Applause ]>>Thank you so very much. [ Applause ] Okay. Thank you very much.
Thank you very much. Thank you.>>And maybe you have seen all
the posters of FLEX around here at re;publica. There will be a
poster signing tomorrow at 1 p.m. at the booth here at the main
stage. Thank you very much. Have a good night. [ Applause ] .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .

Leave a Reply

Your email address will not be published. Required fields are marked *