The very fact that you're listening to my voice right now is a bloody miracle. When I was a kid growing up in pre-liberalization India, I would never have thought that I could speak into a mic in my room to someone thousands of miles away and be heard the next day by hundreds of thousands of people. What manner of wizardry is this? Almost every bit of technology that I use today would have seemed like magic to me in my childhood.

Hell, I remember in 1994, when I graduated from college and moved to Delhi to work in what was then India's largest advertising agency, there were just four computers on the entire floor where I worked, in a special computer room. And I remember marveling at the software I used once to type a script. Have you heard of WordStar? I remember moving to Bombay shortly after that, and yes, it was Bombay at that time and not Mumbai, and dreaming about how I would one day be rich and famous and have a laptop. It was so aspirational back then, and so out of reach. And even the primitive garbage that was available then was way more expensive than laptops are today.

I remember a friend telling me about how he had something called the internet on his office computer, and you could type in anyone's name and it would show every web page in the world with that name. I went to his office once, was allowed to see his computer, and typed in the words Van Morrison. And my God, what a magical feeling it was to see the resultant web page when it did appear after about five minutes. Because hey, remember dial-up connections? Young Indian men of my age will remember staring at the forehead of Samantha Fox as their dial-up connection tried to reconnect. You remember the sound that would make?
But why am I talking about this now? It's because we normalize every technological advance as soon as it happens, and we have lost our sense of wonder. And so this is a good time to reconsider how far we have come in these years, and how quickly our world is changing around us. Artificial intelligence and machine learning have already changed the way we live our lives, and will continue to transform our post-COVID world. In fact, those of us who live to, say, 2040 will look back on 2020 and think, wow, we lived such primitive lives, we were like cave dwellers. I keep saying that the future is full of unknown unknowns, so we can't imagine what it will look like, but I'm often struck by curiosity. What kind of a world will my future self inhabit?

Welcome to The Seen and the Unseen, our weekly podcast on economics, politics and behavioral science. Please welcome your host, Amit Varma.
Welcome to The Seen and the Unseen. My guest today is Vasant Dhar, an artificial intelligence researcher who is a professor at the Stern School of Business and the Center for Data Science at New York University. But Vasant is no mere academic. He worked with Morgan Stanley in the 1990s, flirted with sports analytics, and even set up a hedge fund called SCT Capital, which used machine learning to make its trades in the markets. His understanding of AI is second to none. He's been both a practitioner and a researcher. In other words, he's had skin in the game.

And indeed, it could be argued that we all have skin in the game now, because all of our lives will be transformed by technology. And it is to explore exactly this that Vasant has started an interview podcast called Brave New World. Vasant is the host of this new show, and will be having conversations with a variety of fascinating guests, such as Scott Galloway, Richard Thaler, Eric Topol and Sinan Aral. It's an initiative of the Data Governance Network, and I'm actually producing the show. This time, I'm working on a show where I am not the host, so I can give you the inside scoop on Brave New World: you are going to hear some incredible conversations on it. So do subscribe to it on your podcast app of choice, and you can also visit its website at bravenewpodcast.com. And now for our conversation. But first, let's take a quick commercial break.
One of the things I have worked on in recent years is getting my reading habit together. This involves making time to read books, but it also means reading long-form articles and essays. There's a world of knowledge available through the internet. But the big question we all face is, how do I navigate this knowledge? Who will be my guide to all the awesome writing out there?

Well, a couple of friends of mine run this awesome company called CTQ Compounds, at ctqcompounds.com, which aims to help people upgrade themselves constantly to stay relevant for the future. A few months ago, I signed up for one of their programs, called The Daily Reader. Every day for six months, they sent me a long-form article to read. The subjects covered went from machine learning to mythology to mental models to even marmalade. This helped me build a habit of reading. At the end of every day, I understood the world a little better than I had the previous day.

Many listeners of The Seen and the Unseen often ask me, hey, how can I build my reading habit and upgrade my brain? I have an answer for you. Head on over to CTQ Compounds and check out The Daily Reader, as well as the other activities that will help your future self be an upgrade on your present self. CTQ Compounds, at ctqcompounds.com. You owe this to yourself.
Vasant, welcome to The Seen and the Unseen.

I'm delighted to be here. Thank you for having me on your show.

It's a great pleasure for me, partly because I've also been, as I announced in the introduction, helping you with your show Brave New World. I'm so excited to listen to the future episodes of that, because it's all such a fascinating subject. Before we go into the future, I want to go back into the past a little bit. I want to talk about your childhood. What was a young Vasant like? In one of your talks on YouTube, you began with this beautiful picture of a bridge in Kashmir, and you spoke about how that's a bridge you went across when you went to school. Tell me about the young Vasant and your years growing up, and what did you want to be when you grew up?
I have no idea, but there's a Jerry Garcia line: what a long, strange trip it's been. That characterizes my life more or less. When I just think of the changes I've seen in my lifetime... that bridge you saw in my TEDx talk was actually a bridge in Safa Kadal in Kashmir, and I used to walk across it to school. When that bridge was under construction, I used to take the boat. I was like four years old, with someone who used to take me to school, who's still alive, by the way, and I meet him in Kashmir every year. We reminisce about those old times, the early 60s. That's where I got started.

My father was in the army, which is why I was in Kashmir a fair amount, because he was posted to hill stations, or rather to what they called forward stations, where families were not allowed. Whenever he was away, my mom and I spent time in Kashmir, and so that's where I started my childhood. It was all over the place because he was in the army. My earliest memories in life are of Wellington in South India, where he was at the Defence Services Staff College. But we were always moving around, and then he got posted as a military attache to Ethiopia, and so we sailed from Bombay to Yemen and then to Ethiopia, and that was a real mind-boggling experience. I was not even nine. I spent three years there, and they were really formative years.

Haile Selassie was the emperor, and I used to get invited to his New Year's parties for diplomats' kids. So for three years I went to meet Haile Selassie: he shook hands with me one of those years, and another year he did a namaste. And many, many years later, when I went to Jamaica, that really endeared me to the Rastafarians, because, I don't know if you know this, but they follow Haile Selassie, and his original name was Ras Tafari, which is where Rastafarian comes from, which most people are not aware of.
Yeah, so it was that, and then boarding school in India after that. And by the way, when I was in Ethiopia, my mom put me in the wrong grade. She put me into seventh grade instead of fourth grade. I'd come from third grade in India, and she said, oh, you're going to standard four. Then we went to a party one day at the ambassador's house, and the kids there told me, oh, standard four here means seventh grade; tell your mom that in the British schools, you want to be in class two. So on the way back home, I told my mom, I said, by the way, it's not standard four, that's seventh grade. And she just said something in Kashmiri which means, shut up, you little pipsqueak, you think you know everything. And the next day I went and took the test for seventh grade, and I did really poorly.

The headmaster called my dad. He was a very understated British guy, and he said, you know, are you sure you know what you're doing? What he meant was, your son's like nine years old, why are you putting him in seventh grade? And my dad completely missed it, right? Because the headmaster, in his typically understated British way, said, you know, I think you're being a little ambitious, actually. And my dad just missed that. And so the first day I walked into class, there were kids who were like 15, you know, whistling at an attractive 19-year-old English teacher. I told my parents about this, and instead of being worried, they were amused. They didn't actually realize it until I went to the airport with my dad to collect my brother, who was coming home from boarding school, and there was this woman who walked by in a short skirt and heels, and I walked up to her and said, hey, Fatima, how are you doing? And she said, hey, Vasant. And we chatted, and my dad said, who is that woman? And I said, well, she's in my class. And he said, what? I said, well, what do you think I've been telling you all this time?

So you know, those incidents have not been atypical. So that was my early childhood. Then I went to boarding school in India, and then IIT, and then graduate school after that in Pittsburgh, which is where I got into AI. So that, in a nutshell, has been my journey. Not a typical childhood, very, very unusual.
And kind of staying with your childhood, because now I'm curious: did your years spent outside the country, you know, hobnobbing with the likes of Haile Selassie, mark you out from the other kids when you came back? Like, you went to boarding school at the Lawrence School, Sanawar, and all that, a fairly elite place at the time. But do you feel that you were a slightly different person, one, because you had this experience of the world which was beyond them? Because those were not the kind of connected times that they are now, when you have easy exposure to other cultures and all of that. Did that kind of make you feel that there was one layer to you which nobody else had?
Yes, I think there was. I'm not quite sure how to describe it, right? It's hard to describe these experiences that I had, in terms of, you know, just a completely different culture. And when you're young, you immerse yourself in it so much more easily. So when my parents went off to diplomat parties, I'd be hanging out with the houseboy and the maid, and we'd talk in Amharic, right? So I learned Amharic, the language, and then in school I was learning French. It was just a very unique experience. So the answer is, yeah, it probably did change me in some ways.

And then, of course, boarding school was an entirely different trip altogether, you know, to come back to India, to go from third grade to seventh grade to eighth grade, back to sixth grade in boarding school, and finally have some sort of stability for the last six or seven years of my schooling. Before that, I was in a new school every year, because my father would get posted to Ambala or somewhere, and I'd go to school there. So there was just constant change in my childhood, a lot of change, but it was very interesting, because it also gave me a wanderlust, I guess. When I was in IIT, a friend and I hitchhiked from Delhi to London. And just last year I met someone at the London Business School, a faculty member, who had actually done it at exactly the same time in the other direction, London to Delhi. That was a time when there was a lot of movement, a lot of hippies were going back and forth. So it was a great time, but it gave me a sense of wanting to explore, which I've continued to do since then. I just love to travel and be in other cultures.
And I noticed when you went to IIT Bombay, you know, you passed out from IIT in '78 and did a BTech in chemical engineering, which doesn't seem too connected with everything that came after. So were you just following the standard route that one does, school and then, okay, IIT is the thing to do, so one goes for engineering and all of that? Or did you have specific ideas of your own at that point in time about the things you wanted to do? And was there anything in your interests or your hobbies at that time that presaged the future direction of your life, as it were?
I was just trying to create good alternatives for myself, you know, and my parents, being typical Indian parents of that era, said, do something where you can get a job, something where you can support yourself, and you're good at math and stuff. So engineering seemed like the right thing. So I took Sangal's classes in South Extension. I went to IIT Delhi, by the way. I took his classes, and we were just so far behind the kids from the Delhi schools, like Columba's and Xavier's. We guys had gone to boarding school, we played lots of sports. And on my very first day in Sangal's class, I was just completely lost. He'd say, achha, sin 4A kya hai (okay, what is sin 4A), and like five guys would raise their hands and say whatever, while I'd be trying to figure out how to break down this sin 4A and solve it. So yeah, it was nothing deliberate, it was just: go for the alternatives in front of you. I took the IIT exam, which, as you know, is brutal and grueling, stumbled my way into IIT, and then applied to graduate school. And Pittsburgh is really where I found myself, in the graduate program.
That's when I learned about artificial intelligence and what it was. And I'll never forget my very first exposure to AI; it will stay with me. I had just joined the PhD program there, and a bunch of students asked me, they said, hey, we want to ask this professor to offer a course in artificial intelligence, but we want to go in force so he knows there are enough of us. So I said, what is AI? And they said, well, it's when machines get intelligent. And I was like, okay, whatever. And we went up to this lab in the medical school, 13th floor, top floor, and there was this special decision systems lab. And there was this professor, Harry Pople, who had built the first diagnostic system for the entire field of internal medicine. I had no idea what that was. And I had no idea that I'd be spending the next five years of my life in that lab, pretty much full time. But we went there and asked him, and he said, oh yeah, I'd be delighted, and asked us about our backgrounds.
But I remember, while we were waiting for him (he was on a call), there was this gigantic screen in the middle of the room, connected to a computer. And there was this physician, completely white hair, puffing a pipe. And he was, quote unquote, talking to the monitor in front of him through his assistant, who was typing, because people in those days couldn't type. And the system was asking him questions about a case; he was trying to solve a case. So the system asked him a few questions, he gave the answers, and it asked him about blood, urine, all that kind of stuff. It was engaging in a dialogue, and then it asked him a question like, was there pain in the left lower abdomen? And he said, puff, puff, why? Like, why are you asking me that question? And on the screen, the computer said, well, because the evidence you've given me so far is consistent with the following hypotheses, and this question will help me discriminate between the top two. And I was just looking at this thing going, holy smoke, how is a computer doing this? How the hell is a computer behaving in this way and asking him these questions?

That was my first exposure. And I think one of the things we'll probably come back to at some point during this talk is where we are now relative to where we were then, right? And I'm talking 1979, when I was watching this dialogue happen, and the system hardly ever made any mistakes. It would actually diagnose virtually every case correctly, right? So that was my first exposure to AI. And I said, wow, I don't know what the hell this thing is, but this is what I want to do for the rest of my life.
So that's what I mean by having found what I really wanted to do, and a purpose in life, there, because of him and another gentleman called Herb Simon, who was a Nobel laureate in economics and one of the fathers of AI. These were my two mentors in grad school, and they gave me a hell of a training and taught me how to think, which I didn't really know how to do before that. I didn't really know how to think. I didn't really know how to ask a question and answer it, right? Even though I'd gone to IIT and done engineering and math and all that kind of stuff, I just didn't know how to think. I mean, I could solve problems and I was good analytically, so I had some good tools in my back pocket. But learning how to think about problems and ask the question, and learning what scientific inquiry is really all about and how to conduct it, was what they exposed me to. And that's been the rest of my career. So when I started with that Jerry Garcia quote, you can see why it's been a long, strange trip.
Yeah, you know, I'm so fascinated and delighted by that story, that moment of magic when you see the computer doing what it does. And it strikes me that in our own lives, and this will especially be true for young people today, we are so jaded and take the tech around us so much for granted that we probably don't have those moments of magic, those aha moments when you realize, what the hell! Speaking of myself, one such moment was actually quite recent, and it reminded me of how jaded I had become. I'm a bit of a chess enthusiast, and it came when AlphaZero was released and I looked at AlphaZero's games. Computers in chess, of course, have been around for the longest time; Stockfish and the rest have been used for pedagogy by the top chess players forever, and have shaped the world of chess in very interesting ways. But AlphaZero was just such a step ahead.

And I thought of that just now when you spoke about how you started learning how to think from what you had seen and from what AI has done. Because, for example, the chess world champion Magnus Carlsen's coach, Peter Heine Nielsen, once said: I always used to wonder what it would be like to see aliens with a super intelligence play chess, and when I saw the AlphaZero games, I knew. But we'll come back to that later. I want to probe a little further into what you meant when you said that you learned how to think. Can you elaborate a little bit on that? What do you mean by learning how to think?
Let me see if I can do it justice, because it's one of those things where you kind of know what it is, but putting it into words is a little challenging. It means several things. First, it means developing an ability to ask a good question. I mean, I've been teaching for 30-plus years, and very often in class someone asks a question and I say, wow, that's a great question. And what makes for a great question, right? There's a bit of an art to it, but you learn to recognize it when you see it. Some questions are just better than others. And maybe if I have some more time to think about it, I may be able to come up with an actual model of what makes a good question. But the ability to think is firstly about how to ask the right question.

And then, once you've asked a good question, there's the ability to say, well, is this answerable, right? Can I actually answer this question? If I had all the resources, all the data, could I actually answer it? And what sorts of answers do I expect; do I have any expectations about what I should observe? What do we know at the moment about this question? So if I'm asking a question, what do we already know about it?
Just to go off on a bit of a tangent here: someone called me yesterday who's interested in applying to the PhD program in data science at NYU. He wanted 20 minutes of my time, so I said, call me. And I said, so what's going on in your head? He said, you know, I'm graduating, and I've become fascinated by reinforcement learning because of what I saw in AlphaGo, which used reinforcement learning. I'm really interested in reinforcement learning because I think there's some real magic to that method, and that's what I want to explore in my PhD program. And my question to him was, well, why do you think it worked so well in AlphaGo as opposed to other methods? Do you think other methods could have cracked Go as easily, or was it something unique about reinforcement learning? And he said, gee, I don't know, that's an interesting question. And I said, okay, and what do you think makes reinforcement learning work in game-playing programs? Do you think it would work across the board?

So I said to him, I view data science as a canvas where rows are methods and columns are domains. And the interesting questions arise at the intersections of these methods and domains. Reinforcement learning is a method, so look at it as a row. And game playing is a column, because game playing is a domain, just like finance or healthcare or physics or law; those are all domains, right? So do you think reinforcement learning would work across all of these domains equally well, or is there something about game playing that makes it work really well?
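(To make the canvas image concrete, here is a toy sketch in Python. This is my own illustration, not Professor Dhar's notation; the method and domain labels are placeholders, not a real taxonomy.)

```python
# A toy rendering of the "canvas": rows are methods, columns are domains,
# and each cell is a research question about whether (and why) the method
# works in that domain. Labels are illustrative only.
methods = ["reinforcement_learning", "deep_learning", "genetic_algorithms"]
domains = ["game_playing", "finance", "healthcare", "law"]

canvas = {(m, d): "open question" for m in methods for d in domains}
# One well-studied cell: RL x game playing, where cheap simulation and a
# crisp win/loss reward arguably do a lot of the work.
canvas[("reinforcement_learning", "game_playing")] = "works well"

for (m, d), status in sorted(canvas.items()):
    print(f"{m:24s} x {d:12s} -> {status}")
```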
So when you asked me earlier what it means to learn how to think, that's what I'm getting at, right? Where are we right now? What's the baseline? What do we know about something? And why is this a good question; why will it take us forward significantly, which is what makes it a good question? And then, can you actually solve the problem? Do you have the data to solve it? Do you have the chops? And do you have any idea how long it's going to take you to solve it?

So, one of the things... some people call me a 'pracademic': I'm an academic, but I'm also a practitioner, and I created an automated hedge fund many years ago. And when I think of research there, my question always is, how long will it take? Will it take a day, two days, a week, a month, or a year? If it's going to take a year, I'm not interested; it's too uncertain. But if it's going to take a day, I'm very interested. If it's going to take a week, well, I need to think about it, right? So it's all of those things that go into being a good researcher, from A to Z, from asking the question to execution.
So I'm just thinking aloud again, zooming into the element of asking a good question, and that's a great answer, by the way. Would one quality of a good question be that you minimize the assumptions? Because it strikes me that humans are full of inherent biases and assumptions about the way the world works. And sometimes, if those are embedded within the question that you ask, that can make the question less effective. Like, again, going back to chess: one of the things that programs like Stockfish have embedded in them is the earlier heuristics of how you evaluate a chess position, where material has a certain importance, space has a certain importance, and so on. Whereas with AlphaZero, one of the things we realized was that chess players across human history have been undervaluing the importance of initiative relative to material. And that impression would have persisted, because if you ask questions with the wrong assumptions, then you don't really get those great answers. So again, I'm just thinking aloud: would good questions, therefore, involve a willingness to take that further step back and be open to learning that your earlier assumptions were false as well?
That's a great way to look at it, in terms of how many assumptions you are making in asking the question, right? There's another way to look at it, which is: can I control for a bunch of things, assumptions, for example, when I answer the question? That's the other way to think about it. So, just thinking aloud here, if I were to ask a question like, has India been adversely affected by COVID because of its administrative systems (I'm just making this up), that's a very broad question. And you might say, well, what kinds of assumptions will you have to make in answering it? And you're right: if I have to make all kinds of assumptions about the problem, the data, and all that kind of stuff, then by the time I answer the question, the answer may not mean much. It may not be relevant. But if I can ask a really nice, tight question that involves few assumptions, and, to the extent that it involves assumptions, I can control for them, then that's okay. So, to summarize my answer: yeah, in general, you want to make sure that you're not making too many assumptions, especially the wrong ones, and to the extent you are, can you control for them when you answer the question?
And one of the fascinating quotes of yours that I found, and I'm now going to quote you back to yourself, which really made me sit back and think, is when you said, quote: when you have a data-driven approach to life, you find that patterns often emerge before reasons do. Unquote. I found that eye-opening in various contexts, not just chess, which is what I immediately thought of. And that's something, as you pointed out, that came from your journey examining big data, which I think you mentioned started in 1990, when Nielsen approached you to look at some data and you found some unusual patterns which you could not explain at the moment, but which were clearly patterns and clearly significant. But before we go there, take me through your journey through the 80s, when you spent five years in this lab. What kind of work were you doing? What were the kinds of problems that interested you then? Because I'm guessing that it is at around this point, from the way you describe it, that you developed this intellectual focus, that these are the kinds of problems you want to work on. So tell me a little bit about that journey from that point on.
So I'm really glad you asked the question, because it actually helps to clarify how AI itself has changed in the last 40 years. In the 80s, or rather, I should say until the 90s, AI was largely about reasoning: systems that could reason from data and make inferences. And I'd say that the language of AI was mostly logic, right? Because with logic, you're on firm ground: if A implies B and you're seeing A, well, then B must be true as well, right? So a lot of AI had hitched itself to logic, and logicians were in control of the field at that time. And then people said, you know, logic isn't enough. In fact, even my thesis advisor said, deductive logic doesn't do it. He picked up on something that a philosopher called Charles Sanders Peirce had come up with, called abductive logic, which is that if A implies B and you observe B, it doesn't mean that A is necessarily true, but it could be, right? It's more like induction: you're observing things and invoking hypotheses, or reasons, for them.
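(A minimal sketch of that deduction/abduction contrast, as a toy Python example. This is my illustration; the rule and symptom names are invented.)

```python
# Toy knowledge base: one rule of the form A -> B ("flu causes fever and aches").
rules = {"flu": ["fever", "aches"]}

def deduce(cause):
    """Deduction: given A and the rule A -> B, conclude B with certainty."""
    return rules.get(cause, [])

def abduce(observation):
    """Abduction (Peirce): given B and the rule A -> B, propose A as a
    candidate explanation. The answer is a hypothesis, not a guaranteed truth."""
    return [cause for cause, effects in rules.items() if observation in effects]

print(deduce("flu"))    # ['fever', 'aches']: must hold if the rule holds
print(abduce("fever"))  # ['flu']: could be true, pending more evidence
```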
So AI, I'd say until the 90s, and this is when I got involved in it, was all about logic: representation, reasoning, inference. What happened around the 90s is that there was a sea change, because data started becoming available. In my own career, it was becoming apparent that, wow, we can actually now look towards data. And at that time, it was mostly telephone companies and banks that had the data, other than defense and physics. Physicists have always had tons of data, but that was kind of a world in itself. It was mostly telecom companies and banks, and companies like Nielsen, who were in the information business. And so I saw this as a new paradigm for AI that was emerging, one that would probably eclipse the previous one. And in fact, that's exactly what happened.
So in the early 90s, I shifted my focus towards data and learning from data, and got this project with AC Nielsen, the household services company. They were tracking 50,000 households all over the US: any time they shopped, they scanned the item and it went into a database. And they shared this data with me and said, see if you can find something interesting in this. We don't quite know what, but look at this data. So I did, and I cranked it through these algorithms I was working on at the time, things called genetic algorithms, that would look at data and extract patterns from it.

Then I went to this meeting, and they said, so what did you find? I said, you know, I found something, but I have no idea what it means. They said, all right, let's take it from the top. And I said, it looks like older women in the Northeast do a lot of their shopping on Thursdays. And he said, oh yeah, that's coupon day. What else did you find? And I was just ecstatic that I'd found something really interesting, something that made sense, that I hadn't really told the machine to find. I had just told it to look for unusual shopping activity, and it said, you know, older women on Thursdays. And there were many other patterns like that in the data that they were completely unaware of. The manager at Nielsen said, oh, really? Wow, we didn't know that, right? And so that was interesting: these patterns were emerging, and the reasons came in retrospect. Some of them were easily explained, like, yeah, that's coupon day, but others weren't.
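(A minimal sketch of the kind of pattern search described above, as a toy pandas reconstruction. This is my own code, not Nielsen's pipeline or a genetic algorithm; the segments, weekdays and counts are hypothetical.)

```python
# Flag segment/weekday combinations whose shopping share is unusually high.
import pandas as pd

# Hypothetical panel data: shopper segment, weekday of trip, trip counts.
df = pd.DataFrame({
    "segment": ["older_women_NE", "older_women_NE", "young_men_NE", "young_men_NE"],
    "weekday": ["Thu", "Mon", "Thu", "Mon"],
    "trips":   [900, 300, 310, 290],
})

share = df.pivot_table(index="segment", columns="weekday",
                       values="trips", aggfunc="sum")
share = share.div(share.sum(axis=1), axis=0)   # each segment's share by weekday

# Call a cell "unusual" when it sits far above a uniform baseline. A real
# system would use a proper significance test, not this crude 1.4x cutoff.
baseline = 1 / share.shape[1]
print(share[share > 1.4 * baseline].stack())   # -> older_women_NE / Thu
```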
And fast forward four years: I was now on Wall Street, I'd taken some years off from academia, and I was with a trading group. And I told them, I said, just give me all your data, and I'll tell you if you could have done better. And they said, well, don't you need to know something about what we do? I said, no, just give me your trades. And again, hocus pocus, I cranked them through my genetic algorithm, and I came back to our weekly meeting the next Friday. And the head of the group said, so, Vasant, what did you find? I said, I found something, but I have no idea what it means. And he said, so what did you find? I said, well, when the 30-day volatility is in the lowest quartile, your trades are three times as profitable as they are otherwise.

There was silence around the room for five seconds. And the head researcher said, yeah, I looked at volatility. And the head of the group said, Frank, you know, shut the fuck up. How long have I been telling you to look at volatility? And this guy who knows nothing about what we do tells us it matters, right? And then there was shouting, and people accusing each other of being dumb and missing the picture. And I was just watching this thing and saying, can someone tell me what's going on here? And they said, no, but we've observed that whenever volatility spikes, we lose a lot of money. So it's interesting that you're telling us this, knowing nothing about what we do. And it was six months later that I realized why I was observing what I was observing, because I went into the finance literature, and then I found the reasons for what I'd found.
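(A hedged sketch of the kind of check described in that story: my reconstruction with synthetic data, not the group's actual code. It compares trade profitability across 30-day-volatility quartiles.)

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
trades = pd.DataFrame({
    "vol_30d": rng.uniform(0.05, 0.60, 1000),   # hypothetical 30-day volatility
    "pnl":     rng.normal(1.0, 5.0, 1000),      # hypothetical per-trade P&L
})

# Bucket trades into volatility quartiles and compare mean P&L per bucket.
trades["vol_quartile"] = pd.qcut(trades["vol_30d"], 4,
                                 labels=["Q1_low", "Q2", "Q3", "Q4_high"])
print(trades.groupby("vol_quartile", observed=True)["pnl"].mean())
# In the story, the lowest-volatility quartile came out roughly three times
# as profitable as the rest; with this random toy data the buckets are similar.
```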
And this has happened to me in pretty much every domain I've looked at. I worked with the San Antonio Spurs for a couple of seasons, looking at their data. Sports, basketball, a completely different context, and again, the same thing. Certain things pop up, for example, having to do with the performance of a team at home versus away. Why do they perform worse when they're away? Things like that were just in the data; you could extract them and then say, wow, I had no idea this even existed. And then finding the reasons why those patterns exist is often a very enlightening exercise; it reveals a lot of things about the domain that you may not have been aware of. And to me, this is one of the promises of data science: it can nudge you towards things that you might not even have thought of. I mean, you talked about AlphaGo, and people have said, wow, it came up with these moves that we never would have thought about, right? Machines look at the world differently from humans, and they often reveal things that surprise us in retrospect. And that's where that quote came from: patterns often emerge before the reasons for them become apparent.
Yeah, this reminds me, and this is not apropos of AI as such, but when you spoke about looking for reasons after the patterns emerge, I thought of the phrase 'the interpreter', which is a phrase Michael Gazzaniga used when he did his famous experiments on split-brain patients. I guess you've heard about that, right?

I have not, I'm sorry to say, but it sounds really fascinating.

Yeah, I'll go through it for the benefit of my listeners. This really happened in, I think, the 50s or the 60s, and Gazzaniga more recently wrote a book called Human, or something like that, where he talks about it. But this is a seminal experiment mentioned in various books. One of the ways of helping to mitigate severe epilepsy was to cut the corpus callosum, I think, which connects the right brain and the left brain. So when you sever that, the two hemispheres can't communicate with each other. He tried this experiment with patients on whom this operation had been done, where you basically divide their field of vision. The right brain, of course, controls the left eye, or whatever; but essentially, you divide the field of vision, since the two halves of the brain aren't talking to each other. And in the part of the field of vision that the right brain can perceive, you show an instruction, something like: ask for a glass of water, or say this, or go to the door. And the person would do that. He would follow the instruction. And then he would be asked, why did you do that? And he would have no clue. So the left part of his brain would make up an explanation which was completely unrelated, and it would actually believe it. And he therefore called this function in the brain 'the interpreter'.

Reading about this was quite a moment for me, because I realized that many of the things we do in our lives, we think we do for reasons, but we could be making the reasons up post facto, we could be rationalizing along the way. And our actual actions could be caused by a variety of instinctual things which we are not aware of at a conscious level.
And I'm sorry to interrupt, because there's so much I want to say on that, right? There are at least two strands to what you've said. One is that you may be convincing yourself that you've found the reasons, and fooling yourself, which is absolutely right. And my hypothesis is that in areas where the domain is much more well-defined, that's an easier problem. So when a problem has a high degree of predictability, when it has a good theory, you can have more confidence that you've actually interpreted the pattern correctly, or for the right reasons. So that's a fascinating line of thinking. And you're absolutely right that even though you're trying to explain the reasons, it's not always the case that you'll be successful or find the right reason.

The other strand: I don't know if you've read this book called My Stroke of Insight? Oh, you will find that fascinating. The book, I would say, is even more interesting than the TED talk, because it really gets into the story of this individual, a brain scientist, who has a stroke in which her left brain is impaired. And she's aware that her left brain is impaired. So it's this whole story about right-brain, left-brain function. She said she felt a sense of bliss, because the right brain apparently is a parallel processor that just takes in the information as it comes, streaming in blissfully. And the left brain is the logical one: logic, time, reasoning, all of that stuff, and how these two halves communicate. So since you mentioned this experiment, I think you'll find this book really fascinating, both from a scientific view and a humanistic view, because she describes her many months in the hospital. When her mother came to visit her, she could not recognize her mother. She had no idea who this person was, but she felt a sense of warmth; there was something really nice exuding from that person, in that aura, right? So she had the intuition, which was this right-brain kind of thing, but no logic. And it's a story about how she then learned to bootstrap her left hemisphere and get things back to as normal as they can be. But I think you'll find that book really fascinating, and it's a great TED talk as well, My Stroke of Insight.
I'm absolutely going to look it up. There's another, larger question I'd saved for the last part of the show, but since we're on the subject, I might as well ask it now. A couple of things, actually. One is that the more one reads about AI, the more one realizes that our brains are also machines: in some ways magnificent machines, and in some ways very flawed machines, and so on and so forth. So when we learn about AI, it seems to me that there are three possible phases we can go through with regard to learning about AI and what AI can do with data. And many people possibly don't reach the third one.

The first is, of course, excitement: that moment of magic when you realize what it can do, which you described happening to you at the lab. The first time someone from a village sees GPS on a smartphone can also be a moment of enormous magic, because, my God, it is magical technology if you think of it from the vantage point of, say, the 1960s.

The next phase, I'm guessing, and this is one I see in a lot of policymakers, for example, is arrogance, because data gives them so much power; having all these tools at their disposal gives them so much power. And already, as it is, humans in government often suffer from what Friedrich Hayek called the fatal conceit. So when you have these great tools at your disposal, does that accentuate the arrogance, where you think you can achieve any end, where you think you are superhuman in a sense, whether it's in reordering society or in gaming markets or whatever it may be?

And the third phase, I would postulate, and I'm just completely thinking aloud, and I'm sure much greater thinkers than me have gone through all of this, but after excitement and arrogance, I'm thinking the third phase could well be humility, where you realize your own inherent limitations. That veil of arrogance which makes you believe you're something special in the universe falls apart, and you realize that, besides being mortal, you are in so many ways hopelessly inadequate, even in this one thing that we assume we can do better than other animals.
There's a whole bunch of stuff in what you've just said, so let me take it at two levels. One is just the capabilities of AI. And the second part, which you were getting at, is what you call arrogance, but is really this unbridled use of data and the power that you get from it, and what that says about how we organize society. Because that's probably one of the biggest questions of our time right now. The internet was supposed to be free, right? It was supposed to be free, empowering, all that kind of stuff. I don't know how you feel, but to me, the internet of today doesn't feel particularly empowering in some ways, though very much so in others. It's become a much more complicated space than it was 25 years ago, when it was in its relative infancy. And we're seeing different countries adopt different models of governance of data and platforms. I've described four of these: the US, Europe, India, and China, and we can come back to that in a little bit. But that's one of the big questions of our time: how will we govern AI? How will we govern these platforms, which have essentially become AI platforms, because they're very data-intensive and very automated? So that's one of the big questions.
But there's another thing you talked about, which is this excitement and arrogance. So I want to say one thing, which is that, as excited as I am about AI and the progress we've made, we're still in the Bronze Age, right? These are very early stages of AI. Machines have gotten incredibly good, but that's relative to what they were, which was incredibly stupid. They were stupid, they were dumb, right? Now they're pretty amazing relative to our expectations. And you talked about how people have become blasé about all these advances, and it's true. Like my students: I remember the first time Watson became the Jeopardy champion, and I asked people, I said, so what do you think of this? Isn't that amazing? And they said, yeah, it's pretty good. And I was like, what do you mean, it's pretty good? I said, your expectations are just crazy, right?

Because my expectations were, I guess, so low in retrospect. The very first time I saw a search engine and was able to type anything in natural language, I was amazed that it came up with anything remotely useful, because in my experience with natural language 20 years before, systems were just terrible. They'd look at keywords, try to do their best, and throw something at you. They were pretty stupid at the time, right? But I was amazed that it would actually give you something useful. Now, the fact that you can talk to Alexa or Siri or whatever just boggles my mind. I never imagined I'd see this in my lifetime: systems that we can actually talk to, and that seem to understand, quote unquote, a fair amount of what we're saying, at least at sufficient depth to provide a useful answer. So that's pretty amazing. We've come a long way. But I want to come back to this: we're still in the Bronze Age. We've got a long way to go here.
So I talked about that medical diagnostic system that I first came across in '79, the one I thought was amazing because it was engaging in a dialogue. Now the difference is that you can actually feed the system an X-ray image. You can feed it all of this data that you're seeing as a physician, which you couldn't before: you had to look at the image, see it, and then enter it into the system, and then the system would, quote unquote, reason with it. Whereas now we've gotten to the point where machines have become good at seeing things, so that gets tossed into the mix as well. But are we at the point yet where machines function at a level equivalent to human experts? For the most part, not really. Humans still hold that edge, because of our intelligence. And humans and machines see the world very differently, by the way; we can come back to that a little later. But we're not at that stage yet, because machines still make lots of mistakes. And this is something I've talked about a lot in my research as well: when do we trust machines? My simple theory is that we trust them when they don't make too many mistakes, and when those mistakes don't have very severe consequences, right?
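(For concreteness, here is a toy formalization of that two-factor trust heuristic. It is my own sketch, not Professor Dhar's actual model; the thresholds and example numbers are invented.)

```python
# Trust a machine when its error rate is low AND the cost of its errors is low.
def trust_machine(error_rate: float, error_severity: float,
                  rate_tol: float = 0.01, severity_tol: float = 0.1) -> bool:
    """error_rate: fraction of decisions that are wrong (0..1).
    error_severity: normalized cost of a typical mistake (0 harmless, 1 catastrophic).
    The tolerance thresholds are illustrative, not calibrated."""
    return error_rate <= rate_tol and error_severity <= severity_tol

print(trust_machine(0.001, 0.05))  # e.g. a spam filter: mistakes are cheap -> trust
print(trust_machine(0.001, 0.95))  # e.g. a driverless car today: rare but fatal -> don't
```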
And at the moment, we're just grappling with trying to understand this in more detail. What kinds of mistakes will driverless cars make? Will they plow over kids routinely? Will they slam into poles, or whatever, right? A chief scientist from one of these driverless car efforts told me that if you drive in driverless mode these days, you'll probably die four or five times. So we're not good enough yet in terms of driverless cars. But the larger question, really, in all of these domains where we apply AI, whether it is medicine and healthcare, or payments, or whatever you can think of, is the question we always have to ask ourselves: what can go wrong, and how do you really avoid catastrophes? How do we bridle the power of AI without exposing ourselves to these huge risks that these machines can subject us to? So even when you talk about data, and big brother observing society and having access to all these things, the question is: what can go wrong, and are you willing to deal with the consequences?
It reminds me of a talk by James Robinson several years ago. He wrote this book called Why Nations Fail. It's all about the importance of institutions and checks and balances, and how societies that have more advanced institutions generally tend to do better than the ones that don't. And I remember, after his talk, and this was in Goa, he and I went out into town, and I asked him, are your assumptions still valid in the age of AI and data? Because what I see China doing blows my mind, right? Maybe the assumptions you're making are the assumptions of an old era, where information flowed like molasses and people weren't informed, so decentralization was better: let people close to the phenomenon make the decisions, and let's have strong institutions with checks and balances. My question to him was, are we in a different era now, where, because there's so much data, so much information, we can actually have very effective centralized control? And I believe the Chinese government believes that. I think they actually believe that their leadership has the wisdom, and that, given the power, they will wield it in a responsible way. It remains to be seen how things will look 20 years from now. It'll be a fascinating experiment, this one: where China is 10 or 20 years from now, with its very centralized model of data gathering and control and decision making, versus these democratic societies that are just much more freewheeling and messy, right?

Now, we've seen with corona that the centralized model has actually worked better, right? South Korea is my favorite example, where they have two to four deaths a day. That's an amazing statistic, right? That's because there's a congruence between what the government is trying to do and what individuals want: they all want to keep deaths down and infection rates low, and so individuals are willing to trust the government with this one. But it's a really messy kind of question, and it doesn't have an easy answer: where do you really draw the line, with the people in control saying, yeah, we're responsible, trust us, if we just had the right data, we'd do the right thing? South Korea has done that and demonstrated it, so it shows that it is possible for individuals and the government to work together to solve a larger problem, for the larger social good. And by the way, South Korea is a democracy, right? So different models, at least for COVID, have worked very differently.
But the larger question you ask is a fascinating one, which is: how do you find the balance between the arrogance and the capability, right? The arrogance of people in charge saying, yeah, if we have the data, we can do amazing things, versus them actually doing some amazing things but also doing some pretty screwed-up things along the way. Because they can't help it: they have access to the data, and they don't have accountability, right? So this is what we're dealing with, and this is what I see lawmakers dealing with in the US, in Europe, in India. And I'm sure the Chinese are thinking about this as well. So it's a fascinating situation we're in: the internet isn't what it was supposed to be. It isn't this freewheeling internet; it has become something so powerful that we need to figure out how to govern it. And the choices we make in the next few years will have a huge influence on society in the coming decades.
So you've touched on many things here, and through the course of this conversation I'm going to come to each of these big questions separately. For example: when should we trust data? What are the dangers of AI going too far? What are the dangers of states going too far in using AI for their totalitarian purposes? How should we govern the use of AI today? All these are big questions, and we'll come to them.
But first, a sort of response, or even a musing, about what you said about the internet, which also brings up a related question, both an ethical question and a question of how humans regard themselves. I completely agree with you that the internet has enabled amazing progress in various ways, but it has also made our discourse polarized and toxic, and caused so many dangers. And I think part of the reason we were all hopeful about the internet at one point in time, and I think more or less correctly so, is that we assumed it would enable various aspects of humanity to express themselves to their fullest. But that held an implicit assumption, which I think was wrong: that all of those aspects getting expressed inevitably means a march towards virtue, or a march towards a better world. That is not the case, because there are many aspects of human personality which are very ugly, for example, our tribal instincts. What we see on the internet today, especially on social media, is that there is constantly a move towards the extremes. People form ideological echo chambers. Confirmation bias kicks in when they look at the world around them. Dialogue has stopped: people aren't talking to each other, but past each other and at each other. People are constantly signaling to raise their status within their own in-groups. And as a consequence, everything is driven to extremism. Also, reality doesn't matter so much as narratives: you basically pick your own narrative, and then you live in that world constructed in your head, and you can just get away with that. And there are a lot of bad consequences of that. The one I see most often in what I do is just how bad the political discourse has become.

But then the ethical question that comes up, and I've had episodes in the past where I've grappled with this issue, is that a lot of these consequences come from the voluntary actions of humans. If I, for example, want to be in an ideological echo chamber, and I want to consume only the sort of narratives that satisfy me, without giving a damn about whether they conform to reality or not, then the question that arises is: one, should somebody get in the way of my voluntary choices, and would it be correct for them to do so? That's really an ethical question, because then you're telling people what is good for them or what they should do, and that becomes a very paternalistic kind of approach. And two, whether that will make a difference anyway, because now that we've been empowered with tools that allow us, to take just one domain, to believe whatever narrative we want about the world, we are going to do that anyway. You might say that, no, the algorithm should give me more broad-based views and news from across the spectrum, but I will still believe what I want. So, I mean, this is a bit of a ramble, but any reactions?
No, it's a fascinating question, right?
#
What you're really touching on is, well, the internet was supposed to be free.
#
So what if I have a bias, I should express it freely.
#
After all, I'm doing it voluntarily.
#
And I can completely see the logic for that.
#
The question you have to ask yourself is, is it voluntary?
#
Is your response voluntary or is it orchestrated by an algorithm?
#
And the disturbing thing that I think we're seeing sufficient evidence of is that if I
#
have an AI machine, that machine has what we call an objective function.
#
That machine is driven, its behavior is driven by an objective function, right?
#
So in healthcare, if I'm diagnosing cases, you know, I have a machine that's learning
#
how to diagnose cases, it's trying to minimize the errors it makes, right?
#
That's its objective function: to minimize error, right?
#
So that's how it learns.
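#
To make that concrete, here is a minimal sketch in Python of what an objective function looks like for a toy diagnostic classifier. Everything in it, the data, the crude random-search training, and all the names, is invented for illustration, not drawn from any real system.

```python
# A toy "objective function": the fraction of misdiagnosed cases.
# Learning means searching for weights that drive this number down.
import numpy as np

def predict(weights, features):
    # Tiny linear classifier: diagnose positive if the weighted sum > 0.
    return (features @ weights > 0).astype(int)

def objective(weights, features, labels):
    # Error rate: the quantity the machine is told to minimize.
    return np.mean(predict(weights, features) != labels)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 hypothetical cases, 3 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical ground-truth diagnoses

# Crude random search: keep whichever weights score lowest on the objective.
best = min((rng.normal(size=3) for _ in range(500)),
           key=lambda w: objective(w, X, y))
print("error rate:", objective(best, X, y))
```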
#
Now let's take this in a social context, right?
#
Now I have an algorithm, and I'm making money through, let's say, advertising.
#
And now I tell the algorithm, hey, just go for it.
#
Maximize my advertising revenue.
#
You figure out the best way to do it, right?
#
I'm not going to tell you.
#
And the algorithm goes, you know, it does its hocus pocus, and it realizes that, oh, people are more likely to accept friend suggestions when they see that they've got more people in common with the friend that's just been suggested to them by this algorithm, right?
#
And the more likely it is that you accept the invitation, the more likely it is now
#
that you're connected more densely to this cluster of people and this more dense connection
#
increases your engagement with the platform.
#
You now spend more time on it.
#
And now that your engagement has increased, we find that, hey, like our revenues go up
#
when people are more engaged, surprise, surprise, right?
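#
As a sketch of the triangle-closing idea being described here, assuming an invented toy friendship graph; real platforms obviously rank candidates on far richer signals than mutual-friend counts.

```python
# Suggest the stranger who shares the most mutual friends with you:
# the more triangles a suggestion would close, the likelier the accept,
# the denser the cluster, the higher the engagement.
graph = {
    "you":  {"asha", "bina", "chen"},
    "asha": {"you", "bina", "dev"},
    "bina": {"you", "asha", "dev", "esha"},
    "chen": {"you"},
    "dev":  {"asha", "bina"},
    "esha": {"bina"},
}

def suggest_friends(user):
    strangers = set(graph) - graph[user] - {user}
    return sorted(strangers,
                  key=lambda c: len(graph[user] & graph[c]),
                  reverse=True)

print(suggest_friends("you"))  # ['dev', 'esha']: dev closes two triangles
```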
#
Now if you're the operator of that algorithm, what are you going to tell it to do?
#
You've got an obligation to your shareholders, right, to maximize ad revenue.
#
Well, you're going to tell the algo, just go for it, you know?
#
Keep closing these triangles all day.
#
And sure, we get echo chambers, but what the hell, that's someone else's problem, right?
#
It's almost like factories in the old days that polluted the rivers and said, yeah, we're
#
just polluting the river and polluting the air, but for us, it's free.
#
It's society's problem to clean that shit up, right?
#
That's what economists call an externality: that, you know, General Electric and its, you know, nuclear power or whatever, is polluting the Hudson.
#
Well, you know, then we realized, gee, that's bad for society, we're going to fine them for doing that, right?
#
So we have the EPA that comes in and says, okay, now we have an environmental protection
#
agency because you guys are polluting the environment, right?
#
We need to protect the environment because you're polluting it, right?
#
And you're causing rates of cancer to go up, people are sick, they're dying, right?
#
We need to do something.
#
It's the same thing happening in social media, if you really think about it, right?
#
But the trouble is we don't really understand it well enough, right?
#
We do know there's plenty of evidence to suggest that these algorithms do some nasty stuff
#
as a side effect of the fact that they're trying to maximize engagement.
#
So they're just trying, they're just doing what they've been told to do, right?
#
And they don't know that they're causing, you know, teen suicide rates to go up or things like that.
#
So when I talk to high school counselors, they say that they're seeing increased levels
#
of stress among teenagers, right?
#
But we can't really figure these things out unless we have some data to be able to measure
#
these things and be able to correlate them with what's going on in society, such as social media use.
#
At the moment, we don't have this data, so the jury's out.
#
But this is the kind of stuff we need to be thinking about, which is, are these things
#
really voluntary, or are you just being a tool that the algo is kind of, you know, playing around with? And the algo is not being evil.
#
It's just doing its thing in accordance with its objective function, but it's creating
#
a mess as a side effect of what it's doing.
#
And that's what we're seeing emerging these days, and that's why you have all this concern
#
and lawmakers, you know, both Democrats and Republicans, accusing social media platforms
#
of being biased, of spreading misinformation, you know, all that kind of stuff, when the
#
truth is that, look, you know, we haven't really thought about these things
#
and you can't blame these platforms for doing what they're doing, right?
#
They haven't, you know, egregiously violated the law.
#
You know, they did things that were shady, but within the limits of the law.
#
And yes, in the early days, they told you: we'll protect your data, we won't share it.
#
And then they said, well, maybe we'll share it.
#
And then they said, screw you, we're just going to do whatever we want with it.
#
If you don't want us, you know, we don't want you, right?
#
We're now sufficiently powerful that we don't need you.
#
We have all the data anyway.
#
So that's the situation we're in, where a lot of these platforms have just been bad actors.
#
They've behaved irresponsibly.
#
And we're seeing the side effects of that.
#
So it's a more nuanced question than saying, well, you know, why should we get in the way of freedom, which as a principle I completely agree with.
#
But I think it's more complicated than that, because we're in a whole new world here.
#
We're in a brave new world, where, you know, algos have assumed a level of control that they didn't have 10 years ago.
#
We are indeed in a brave new world here, which is why your new podcast is, you know, much
#
awaited by me and others.
#
You know, I'll come to the political economy and the governance aspect of what do we do
#
about big tech later on in the second half of the show.
#
But I'd like to sort of stay on this for a moment to kind of take a step back and get
#
a little meta and just look at a sort of a broader philosophical aspect of it.
#
Because one, you could argue that there is no such thing as free will.
#
However, we should still behave as if there is.
#
So instead of seeing people's actions as determined by random events, it is sort of perhaps more useful to treat them as voluntary choices.
#
But the other aspect of that is like, let me zoom into the political discourse, which
#
is something, you know, I kind of think about a fair bit, and therefore I can talk about it.
#
Like, I think a seminal moment in this kind of political polarization, a lot of what exacerbated it, was the introduction of the Facebook like button.
#
Maybe around 2008 or 2009 or whatever.
#
Because then what happens is that people start craving likes, because they get their dopamine hit.
#
Similarly, in the Twitter context, they will start craving retweets, because they want validation.
#
They want their notifications to fill up with validation.
#
Now, where does that come from?
#
That comes from a human need for validation, which those tools are playing to, which is
#
why they are so effective.
#
Now, this leads to polarization because to maximize your likes and to maximize your retweets,
#
you take more and more extreme positions.
#
You do more and more signaling of whether you are on the left or the right or whatever.
#
You are doing all that and, you know, and these echo chambers form and what Cass Sunstein
#
calls group polarization then automatically happens.
#
But there are a couple of points here.
#
One point is that these functionalities also serve a useful purpose.
#
For example, almost every ad that I see these days is targeted towards me, and is therefore mutually useful to both me and the advertiser.
#
You know, when I go to YouTube, I am damn happy that their algorithm is tracking my
#
preferences to such a fine degree because it is serving up much more relevant content
#
and is acting as a fantastic filter in that sense.
#
But the other aspect of it is that there is nothing teleological or intentional about this.
#
It's one thing that say when a foreign country says that I want to, you know, spread misinformation
#
and I want to influence an election, there is an intent, there's a teleology there.
#
That's not necessarily the case when it comes to a social media platform, putting something
#
there that actually makes your experience pleasurable in certain ways like the Facebook
#
like button or the Twitter retweet button.
#
So you know, to cut a long story short, what I'm getting at is that all the problems that
#
we see emerging out of social media and out of the internet and all of that, and I'm not
#
using this as an argument for one or the other, I mean this just in a descriptive sense.
#
These are problems inherent in humanity, not in technology.
#
It just so happens that technology has, for example, played on this need for validation
#
and you know, created this effect and so on and so forth.
#
But the deeper thing that we perhaps need to think about, and that should give us cause to ponder and perhaps just be more humble about our species as a whole, is that we are deeply flawed and messed up, and, you know, a lot of that finds expression through technology.
#
We are very deeply flawed and quite deeply messed up, and our history has been quite brutal.
#
I mean, humanity has gone through, you know, savagery, followed by slavery, and then feudalism, and then law, right?
#
So we've then created societies around law.
#
So that's been the progression of humanity.
#
It's been a positive progression, and now we're in the age of tech, where we can actually implement law so much more easily, and it'll actually turbocharge law as well.
#
So we've had a brutal history, but I think the solution is not control, because who the hell knows what the right amount of control is, or what the right kind of control is, right?
#
The solution isn't control, but it's transparency, right?
#
We need to be aware of our savage instincts, of our limitations, right?
#
So we need to become more self-aware.
#
And that's where I see the solution, right?
#
So I, you know, I tend to be an optimist, right?
#
Despite these dysfunctional, dark scenarios that people paint about the future, I actually think the future is going to be pretty amazing, because I have optimism in the ability of mankind to rescue ourselves from the brink.
#
The stakes are too high.
#
You know, we don't want to like destroy our society.
#
We've come close at a few points, you know, with the Cold War and things, but we've sort of been aware of some of these risks that technology has introduced in society.
#
The fact is that we can destroy ourselves with a push of a few buttons, right?
#
So we're in a society that has tremendous destructive potential.
#
And the internet is similar.
#
It has some amazing positive potential, much of which we've seen, right?
#
But we're beginning to see some of these dysfunctional aspects of the interplay between technology and humanity.
#
I agree with you that there's nothing evil about the technology; the technology itself, one might even argue, is neutral.
#
It's the way it's employed that we need to be aware of.
#
And that's what governance is really all about.
#
That's what internet governance is really all about, about being aware of our limitations,
#
about being aware of our history, about being aware of how we're interacting with these
#
platforms, and going into this with our eyes open, and then saying, hey, what's the best way forward?
#
The trouble is when things get dark, right?
#
When people function in the dark, that's when shit happens, right?
#
And we've seen this in every industry, like finance, like, you know, Wall Street with its bad boys for decades.
#
They were the whipping boy, right?
#
It used to be an expression: look at Wall Street, right?
#
Wall Street was synonymous with evil, right?
#
Because people had gotten greedy, you know, they would treat customers badly, they would do things that were unethical, because regulation didn't exist that was geared towards stopping that behavior.
#
So in a sense, regulators are always sort of behind the eight ball, right?
#
People close to the ground find ways to function, find ways around the system, and bankers will be bankers.
#
They were functioning within the law, but were doing unethical things, right?
#
And that led to all kinds of problems.
#
There were lots of NASDAQ dealers that got fined in the 90s, you know, for throwing customers' orders into the garbage or not answering phones, and, you know, stuff like that, just bad stuff.
#
And so regulation came in and said, hey, you know, like, we see this thing going on, we
#
have to put some constraints around it so that these people don't destabilize the system,
#
they don't favor some customers over others, right?
#
So the reason the financial system of the US is sort of the most trusted in the world is because it's been through all kinds of phases: bad stuff has happened, people have come and looked at it, they've sorted it out, to the degree that people now have trust in the system.
#
And systems don't function unless there's a fundamental degree of trust in them.
#
That's why people trust the US financial system, even though it has its, you know, moments.
#
For the most part, things have to add up, right?
#
There's regulation you have to follow; you just can't do whatever you want, right?
#
Whereas, you know, Mr. Zuckerberg can turn a dial and do what the hell he wants and no
#
one is aware of what's going on, right?
#
So the solution to it is transparency, we've got to know what these guys are doing, you
#
know, what's going on inside these platforms, right?
#
If, you know, Jane and Jean are working for me, calling customers all day, the regulator wants to know what Jane and Jean did all day.
#
But if Jane and Jean are machines, do I just get a pass, because they're robots and they can call people all day and do stuff under the radar?
#
No, that doesn't make any sense, right?
#
Why should we treat humans and machines differently if they're doing a similar kind of thing and, you know, creating similar risks to the system, right?
#
So in financial services, we figured out the stuff around humans first and then around systems,
#
because the financial industry was one of the first to automate and adopt technology
#
because the stakes were the highest.
#
And then, you know, we learned to wrap sensible regulation around it, mostly sensible anyway.
#
And so that's where I see ourselves more generally in the social media space, right?
#
So I see that as kind of the current Wild West, which is where the financial industry was in the 70s and the 80s and 90s.
#
Those were relatively Wild West days.
#
That's where I see ourselves now in terms of internet governance.
#
So we need that transparency.
#
And what I think we're talking about is what does that actually mean, right?
#
What does that look like?
#
You know, it sounds great, but, you know, to Zuckerberg and Dorsey, transparency means,
#
well, you know, we'll give you quarterly reports that will tell you all the crap that happened
#
And believe me, we'll be good about it, right?
#
You know, trust us, right?
#
You know, would we have trusted the bankers of the great financial crisis if they said,
#
you know, yes, sorry, I know we were irresponsible and we did all kinds of off-balance-sheet
#
transactions, but believe me, we've learned and trust me, we'll never do this again, right?
#
Yeah, good luck with that one, right?
#
That's where we're at with internet platforms, and they're going to go through a similar process.
#
And it's about time, right?
#
Not to say that this will be fast and expeditious and effective.
#
I'm sure it'll be messy, but we're heading in the right direction and being an optimist,
#
I'm optimistic that, you know, we'll solve this problem.
#
So you know, you described your life in terms of what Jerry Garcia sang about, what a long strange trip it's been, and indeed, because I just realized that you've been on two frontiers.
#
One is the wild west of Wall Street in the 1990s, when you worked there, and the other is the current sort of wild west of this situation.
#
And we'll talk more about what to do with Jane and Jean, but if Jane and Jean are listening to this, which I have no doubt they probably are, because, hey, AI, they can relax a bit, because we'll do that after a quick commercial break.
#
So Jane and Jean, you have one minute to get your act together.
#
As many of you know, I'll soon be coming out with a four-volume anthology of the Seen and the Unseen books, organized around themes such as politics, history and economics.
#
These days I'm wading through over three million words of conversation from all my episodes
#
so far to curate the best bits.
#
And for this to happen, I needed transcripts and that was made possible by a remarkable
#
young startup called TapChief.
#
TapChief at TapChief.com is a digital platform that allows companies to outsource work to
#
their network of freelancers and TapChief's network includes more than 125,000 people
#
You want people to make you a webpage or design a logo or compose a jingle or do some digital
#
marketing for you, TapChief gives you an easy way to reach out to freelancers competing for your business.
#
I can say from first-hand experience how valuable this has been for me; it solved a problem I was actually a bit worried about.
#
So do go over to TapChief.com and check out all that TapChief has to offer.
#
Maybe they could solve your problem too.
#
Welcome back to the Seen and the Unseen.
#
I'm chatting with Vasant Dhar about the brave new world we are in, and brave new world actually makes it sound exciting, but he's optimistic, and so am I, so we won't rename it to scary new world or the dystopian future or anything like that.
#
It is a brave new world.
#
Now, you know, I was going to leave the last part of the episode for talking about the current times, about what we ought to do about big data and all the dangers that we can all agree it poses.
#
But since we are on the subject, I think I might as well go there now, and here I found your writing on this very insightful, and I'm in agreement as well, because whenever I've discussed this issue in the past, it struck me that many of the problems that are arising from the way our data platforms behave, such as increasing polarization, are social problems, and we have to find social solutions.
#
And I'm a bit wary of the state getting involved, because, you know, we always assume, when we advocate the state getting in to solve something, that it will always be benevolent and benign, and that is rarely the case.
#
So you want to achieve things with as little state coercion as possible. Even with the financial crisis, a lot of the things that went wrong, like, going back to the bad incentives of the Community Reinvestment Act in the late 70s, or the moral hazard posed by the way Fannie Mae and Freddie Mac behaved, are examples of what state interference can lead to.
#
The notion that you came up with actually empowers users more than it coerces them, and this also comes from something that Ben Thompson of Stratechery once wrote, which I agree with,
#
so I'll just quote him, where Thompson, discussing the possible regulation of big data, wrote, quote: if regulators, EU or otherwise, truly want to constrain Facebook and Google, or for that matter all of the other ad networks and companies that in reality are far more of a threat to user privacy, then the ultimate force is user demand, and the lever is demanding transparency on exactly what these companies are doing. Stop quote.
#
And I came across this excellent piece that you had written recently, which will be linked from the show notes, where you wrote about three ways to increase social media platforms' transparency, and I love the solution, because what it is, is it's not the state using the strong arm on specific companies to say do X, do Y, do Z; it's just making them more transparent, so that users can then decide for themselves whether they want to engage with those platforms or not.
#
So take me through these three ways to increase transparency, so to say.
#
So, not surprisingly, the solution that I'm proposing has been motivated by the financial services industry that we were talking about, one that is regulated and that, for the most part, seems to work all right, at least in terms of engendering trust in the system.
#
So if you think about it, some of the assumptions that we've been working under are a little bit outdated, regarding, let's say, anonymity.
#
So the first thing I proposed is that, and by the way, in 2016, when I first suggested that we should regulate the social media platforms, I got a lot of backlash from people for being un-American and all that kind of stuff, because people felt that, you know, this is a free country, they should be able to do whatever they want.
#
But, you know, it was becoming clear that that really wasn't going to work, because one of the things about a democracy, and people don't often recognize this, is that it's not just about rights but also about obligations.
#
So yeah, we have rights in a liberal democracy, rights to be free, but we also have obligations towards our fellow citizens. We can't go around killing people; there's a certain sort of expectation, and norms, that exist in a democratic society.
#
And so what I suggested was that we need user transparency: these platforms need to know who the users are, who they're dealing with, like a bank knows who it's dealing with.
#
You can't just set up a bank account without adequate credentials and authentication, to prove to them that you are who you say you are. Now, India, by the way, has a fascinating solution to this with the Aadhaar platform, a really forward-thinking solution. But this is what banks do: are you who you say you are? Know your user. So in banking it's called know your customer; now, in the social media space, the customers are actually the advertisers, because they're the ones paying you, but what I suggested is, you should know your users.
#
If someone is on the platform, who are they? Can they authenticate who they say they are? Because I think they have a responsibility, not to you and me; they can still use a pseudonym and function, you know, anonymously; but the platform must know: who the hell am I dealing with? Is this a real person? They need to know, is this a person, is it a bot, what am I dealing with?
#
So that's the first thing that I suggested: since we're all talking about transparency, well, let's just be transparent to the platform, so they know who they're dealing with.
#
Is that an invasion of privacy? No, it isn't. It's no more an invasion of privacy than being required to provide your bank with information; that's not an invasion of privacy, that's just you authenticating and guaranteeing who you say you are. And by the way, you're also agreeing to behave yourself with the bank: you're agreeing not to conduct fraudulent transactions, you're agreeing not to do trades that destabilize the market, and you're agreeing implicitly to a lot of things when you sign up for an account.
#
So it's not an invasion of privacy. They should know their user. And yet, so far, their motivations have been exactly the opposite: they'd rather not know their users. It's actually more profitable for them not to know their users. Like, you want another account? Come on over, sign up, say whatever you want.
#
And so my solution is: yeah, you can say whatever you want, but we need to know who you are. Authenticate yourself to the platform.
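#
As a sketch of what know-your-user with pseudonymity preserved could look like, assuming a hypothetical yes/no identity check in the spirit described above; the names and the placeholder verification rule are illustrative, not any real API.

```python
# The platform stores your verified legal identity; the public only ever
# sees your pseudonym. Verification is a simple yes/no, Aadhaar-style.
from dataclasses import dataclass

def verify_identity(legal_name: str, credential: str) -> bool:
    # Stand-in for a real authentication utility; placeholder rule only.
    return credential.startswith("valid:")

@dataclass
class Account:
    pseudonym: str   # what the world sees
    legal_name: str  # what only the platform (and a regulator) can see

def sign_up(pseudonym: str, legal_name: str, credential: str) -> Account:
    if not verify_identity(legal_name, credential):
        raise ValueError("cannot authenticate: no account")
    return Account(pseudonym, legal_name)

acct = sign_up("van_morrison_fan", "A. Example", "valid:token123")
print(acct.pseudonym)  # public handle; the legal identity stays internal
```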
#
The second thing I propose is that, just like a regulator can go into a bank and say, hey, show me that you haven't been favoring some customers over others, regulators should be able to do the same here. And I've had to go through that twice myself over the last 10 years, with regulators, in the hedge fund that we operate. It's a machine-learning-based fund; regulators come over and they ask us all kinds of stuff. We have to show them all our trades, how they were allocated to clients, that we weren't favoring anyone, that they all made the same amount of money. So they come in and do a bunch of tests on us and tell us whether we passed or failed.
#
Now, in financial services, these guys have been doing this for a long time; they know what they're looking for. They're looking for the bad behavior. So what I'm proposing is something similar in the social media space: if you're actually generating revenues by making recommendations, well then, keep a history of it. You don't need to share that with the rest of the world, but just keep it, in case regulators want to come in next year and say: so, have you been making recommendations? Tell us; show us. And you should be able to do that. At the moment there's no such requirement; there's absolutely no transparency.
#
So that's the second thing: the ability to provide audit trails of your behavior. And this, by the way, will get rid of a lot of problems that we're having right now, because the very fact that you know you're being observed makes you behave. Not that they'll necessarily come in and want to see everything, but the fact is that they can, so you have to behave in accordance with how you're supposed to. So the second one was transparency in terms of the ability to produce audit trails of your actions.
#
And the third one was algorithmic transparency, which is: what did Jane and Jean do all day, anyway? Broadly speaking, what are they doing? Are they soliciting? Okay, if they're soliciting, how are they doing it? Are they using legitimate data? And after they do the solicitation stuff, what actually happens? Does engagement actually go up? What were the consequences of that? We can actually look at that. So regulators should have the ability to go in and see what the hell is going on inside this operation; it's like operational transparency through the algorithm.
#
So to me, these seem like no-brainers. What's wrong with these three forms of transparency? Yeah, you can keep your secret sauce and have your smart algorithms and whatever, but we just need to be able to see what you've actually done, just like we do in other industries. So that's what I proposed: simple ways to increase transparency, without really much in the way of downside as far as I can see, other than the fact that you need to store a lot of stuff.
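#
To illustrate the audit-trail proposal, a minimal sketch of an append-only recommendation log; the field names and the JSON-lines file format are assumptions made for illustration, not a description of any platform's actual system.

```python
# Append, never overwrite: a regulator can later ask "show us what your
# system recommended, to whom, and why", and the log can answer.
import json, time

AUDIT_LOG = "recommendations.jsonl"

def record_recommendation(user_id, item_id, algorithm, reason):
    entry = {"ts": time.time(), "user": user_id, "item": item_id,
             "algorithm": algorithm, "reason": reason}
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_recommendation("u42", "post_977", "engagement_v3",
                      "high predicted dwell time")
```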
#
No, that's fair enough, and I like these solutions, because
#
none of these seem like state overreach to me, which is something that I'd otherwise be worried about, and your suggestions take that into account as well.
#
Though I was a bit concerned by your framing of it, you know, obligations along with rights. Now, you don't mean it in the same sense as our prime minister does, where Narendra Modi keeps talking about how it's not just about fundamental rights but fundamental duties, which, by the way, is a phrase introduced into our constitution by Indira Gandhi. So it's like from one great totalitarian to the other, in a sense.
#
And what I keep pointing out is that, listen, it's okay to talk about duties, but we should remember that individuals don't have a duty to the state; the state has a duty to individuals. And in fact, the only obligation I would recognize on the part of individuals is not to transgress on the rights of others, which really, you know, comes down to the framing of rights as well.
#
But yeah, as far as the state having an obligation to its citizens is concerned, I don't see how this kind of regulation is an overreach in any way. So I'm good with these, except that, thinking aloud, my thought would be that I totally agree that if everyone had to authenticate herself to the platform, they would automatically behave themselves. But the other two, I don't know what difference they would make.
#
For example, if you have algorithmic transparency, and they show you an algorithm that says, we connect people with like-minded people and we allow them to express themselves in whatever way they want, as long as they are not transgressing the laws or inciting a crime, in that case, you know, that would be perfectly legit. You can't argue with that, and yet all the polarization and all of that is happening because of that.
#
So, you know, would that really solve the problem? We sometimes assume that many of the problems being caused by big tech arise because there are malign algorithms doing malicious things and deliberately spreading misinformation. But no, I would argue that there are algorithms which are completely benign, like connecting you to like-minded people and allowing you to express yourself. So the problem is with humanity, not technology, and regulating technology won't necessarily solve that. And I know what you said isn't intended as a panacea, obviously, but this thought just came to mind.
#
You're absolutely right on that, but this reminds me of some of my colleagues who teach ethics, and the thing you become aware of is that I can conjure up a scenario where it's actually fine to lie.
#
I can conjure up a scenario where I say, okay, you know, you have two choices. One is, if you just tell this little lie, the planet will be saved; but if you tell the truth, we'll all be destroyed. What's the right thing to do? Well, the right thing to do is to lie, right? Even though we know that, ordinarily, you shouldn't lie. So that's the nuance here that we need to appreciate.
#
You're absolutely right that, look, you can't transgress and interfere with people's freedoms, and tell them what to do, and tell these platforms how to operate their algorithms.
#
But if I conjure up a really extreme scenario, one that's just so compelling that you can't ignore it, then what do you do?
#
So, for example, for me a compelling scenario might be this: Facebook's been recording all its recommendations, everything, to everyone, for the last couple of years. And then we go back and look at the stuff and we say, you know what, this is all well and good, but for people under 17 years of age it's leading to increases in suicide. And we can actually demonstrate, quite convincingly, that your algorithms, which are doing whatever the hell they're doing, we have no idea, there's very strong evidence that they're leading to a tenfold increase in teen suicide.
#
What would you do in that case? Would you still say, well, you know, it's a free country, let them die? It's free will, they're just bringing it upon themselves, they should be smart enough not to get sucked in by whatever they're doing? Clearly that's not good enough.
#
In the physical world we have all kinds of safeguards. With food, you're required to label the food and show what's in it, and the same with other things. You can always come up with scenarios where people have decided that something crosses a line and is not acceptable behavior.
#
That's the situation we're in. We just don't know enough at the moment about the consequences of this sort of Wild West that we're in. We have suggestive evidence that tells us we should be concerned, but we don't have all the data, and we struggle with it.
#
There was a Supreme Court judgment last week in the US, where the Supreme Court ruled on Governor Cuomo's edict banning gatherings, let's say religious gatherings, of over 50 people, or 25 in the red zones, and the Supreme Court ruled it unconstitutional.
#
And one of the reasons, one of the judges, I think maybe it was Neil Gorsuch, said, was that there's no evidence that suggests that these institutions have been responsible for the spread of the virus. There's no evidence, right? And so you can look at that and say, we're going to apply the strict scrutiny test to this.
#
Now, the question I have is: supposing we were South Korea, and we had actually measured these things. We'd measured how many people gather, the infection rates, all that kind of stuff. And you present this to the nine Supreme Court justices and you say, you know what, your honorable selves, what we're seeing is that when gatherings go above 49, and I'm just making this up, and the base rate of infections in that community is more than X, the spread of the disease tends to be tenfold higher than it is otherwise, and here's the data to show it. There are, like, 300 cases, and here are the statistics. Would the Supreme Court justices still come up with the same verdict? I would think that they would be affected by the evidence.
#
So what I'm getting at is more transparency for society in general, where we can make these decisions based on the evidence, as opposed to our value systems or beliefs or assumptions. That's the old way. The brave new world will consist of data having first-class status in decision-making, and not just people's assumptions and norms. That is, we should look at data, we should be influenced by evidence, and that cuts across all areas of our lives.
#
So, you know, it just comes back to: no, there shouldn't be overreach; the last thing we need is government overreach. But we do need some way where the government is acting intelligently based on the evidence at hand, and is transparent. To me, that's the best of both worlds: the government is accountable and transparent, and we have a contract with the government, we, I mean, individuals, where we can agree that certain uses of data are for the larger social good, and we work out ways to prevent these transgressions from taking place, but we don't get an overly enthusiastic government coming and changing the rules on us.
#
And that's the situation we find ourselves in at the moment. It is a brave new world. It's a whole new world of how we govern the internet, how we allow machines to influence and play larger roles in our lives in a way that's better all around for everyone, better for society, better for individuals. But we need to think about this and do it consciously, because at the rate we're going, these sort of Wild West ways won't get us there. I'm optimistic that we are seeing the problem in time, and that we will walk away from the precipice and come up with solutions that make sense.
#
So let me probe a little deeper, in the sense that I obviously agree with you on the desirability of transparency, not just in this domain but in so many others. But what I'm a little skeptical about is the efficacy of it, and the reason that is not a technical point but has consequences is that if one finds, later on down the line, that greater transparency doesn't actually solve the problem, then the state can use it as a pretext to actually overreach.
#
Now let me drill down into one hypothetical example that you brought up, and ask you to get concrete. You spoke about how, if there is algorithmic transparency, and we find that there is some algorithmic action that has led to a rise in teen suicides, now, every reasonable person agrees that that's a problem and we should solve it. The question here is, number one, how do you establish causality when there are so many algorithms doing so many things, and so many other influences besides?
#
Number two, even if you manage to pinpoint a particular algorithm, and show that there is some causality, that algorithm could by itself be benign and could have positive effects in other ways. For example, an algorithm that allows people to meet like-minded people and talk about things that they care about could foreseeably lead to a rise in teen suicides, because you could have young teens coming together and driving each other more and more towards despair. But at the same time, you could have young teens coming together and cheering each other up and moving away from the precipice, and the latter case would of course be unseen. What would be seen would be the suicides, if I might invoke the name of the show.
#
So then the question would be, first of all, how do you determine causality in terms of which algorithm caused what? And how do you separate out the ill effects of a benign algorithm if that algorithm also did many good things, such as connecting like-minded people like you and me together, which could have happened through social media for all we know? When I drill down to the concrete, it becomes difficult. And then, if you leave it to human judgment that this caused that, human judgment will always be flawed, and when it's a judgment of the state, it will always end up enhancing their power.
#
I mean, this is of course a non-political example, but if there is a political example, then it will play to the biases of whichever side holds the levers of the state at that point in time, which could be either the far right or the far left. So is that a concern? And obviously forgive me if it's a naive question, but can you give me an example: is there an actual instance where, in this context of social media causing behavior, you figured out that a specific set of algorithms actually caused a specific kind of problem in the real world?
#
So there were at least two things you were alluding to in your question. One was the difficulty of doing the science itself, and the fact that you may not find anything significant once you do the science. That is, the causality may be hard to establish; there may be considerable uncertainty associated with the conclusions you're drawing; even if you find some effect, there's still what I call variance around it, and uncertainty. And that's certainly true.
#
And I guess, in a sense, I'm assuming that we can solve that problem. That, in fact, if there is no demonstration of causality, and no statistically significant relationship between A and B, well then, that should not be used; it's telling you that there's nothing to worry about. So what I'm assuming is that we first do find things that we have to worry about, and we're finding those already. The last two years have revealed some of the things that we should be worried about, but probably not all of them.
#
So to me, transparency is about achieving awareness, because once you're aware of something, you can act more intelligently. So if we're aware of the fact that, let's say, certain algorithms are leading to teen depression, that there's a high association between those two, well then, maybe we can start taking mitigating steps, like making teens more aware of how these algorithms might be influencing them.
#
But if I might interrupt you: algorithms such as what? I'm just having difficulty conceiving of an algorithm that specifically causes, say, teen depression rates to rise, apart from the very general ones of bringing like-minded people together.
#
So, you know, let's say that I come up with an algorithm that finds a way of connecting various kinds of teens together, which increases overall engagement among them, but also increases discord among them, which you can actually see in the data; you can look at what they're saying. And by the way, I'm making this up; I'm not saying that this is actually happening.
#
So the platform is doing something, and we observe that it's actually leading to more engagement; people are talking more to each other. But, you know what, this talking is actually not good talk, it's bad talk, and that bad talk is correlated with depression and suicides in that zip code, in that area. Now let's say the science was done really well and it's showing you this. Well, are you going to ignore it? You could, but it's probably not a good idea to ignore it, because there's something going on that you should probably do something about, and this is something you may not even have been aware of earlier. I mean, how do you know this is going on if you don't even have the data?
#
But if I have sufficient evidence to know that something is going on among the users of that platform that's leading to some very undesirable societal outcome, then I should figure out what I can do about it. That's all. And if there's nothing, well then, I shouldn't do anything about it.
#
So I'm assuming that the science has been done, notwithstanding the messiness of it. Sometimes the data is hard, it's noisy; we're used to dealing with that. So yeah, we can deal with that, but presumably we can still do the science well and establish either causality or a strong association that may suggest causality. That's worth exploring.
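#
As a sketch of the kind of test a regulator might run on such audit data, assuming invented cohort counts: a simple two-proportion z-test comparing an undesirable-outcome rate between users exposed to an algorithm and users not exposed. A small p-value here would flag an association worth investigating; it would not, by itself, prove causation.

```python
# Compare outcome rates between an exposed and an unexposed cohort.
from math import sqrt, erf

exposed_n, exposed_bad = 10_000, 180  # hypothetical exposed users / bad outcomes
control_n, control_bad = 10_000, 120  # hypothetical unexposed users / bad outcomes

p1, p2 = exposed_bad / exposed_n, control_bad / control_n
p = (exposed_bad + control_bad) / (exposed_n + control_n)  # pooled rate
z = (p1 - p2) / sqrt(p * (1 - p) * (1 / exposed_n + 1 / control_n))
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail

print(f"z = {z:.2f}, p-value = {p_value:.4f}")  # small p: flag for scrutiny
```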
#
Let's move on to how different regimes across the world are actually handling this problem. In a piece that you wrote, headlined 'In the AI era, privacy and democracy are in peril', you wrote, quote: I recommend regulating the use of personal data for prediction products. I also propose classifying certain platforms as digital utilities that aim to maximize public benefit and spur economic growth, much like the interstate highway system and the information superhighway have done for physical and electronic commerce. Stop quote.
#
And then you talk about four major models of data use, which are, at the extremes, the US and China, and then the EU and India as well. So can you talk a bit about these four different approaches, where they come from, how effective they are, and which you feel closest to in terms of actually solving the problem?
#
So they all have their strengths and weaknesses. The strength of the US model is that it's free. It's free of almost any kind of regulation at this point, and so it's grown up organically. This is the way that the internet has developed in the US, and people would argue that there have been lots of good things that have happened, that all kinds of really cool stuff has emerged, that may not have if we hadn't given these internet companies and platforms complete freedom and absolved them from lawsuits and responsibilities and things like that.
#
So that's the beauty of the US model, but we've seen the problems with it emerge recently: these platforms have become so powerful, and they've overreached so much in terms of data, that they're messing some things up. And that's why there's all the scrutiny around the internet models in the US.
#
Europe, not surprisingly, has always been for accountability. So their approach to it has been responsible data use: you can use data if it's essential for what you're trying to do. Which also makes sense, because they're trying to cut down the risks associated with this sort of unbridled, wild west use of data; so, provide some kind of accountability.
#
India is a really interesting case for a number of reasons. It has a late-mover advantage, and it also has a different kind of advantage, which is that I think the people who are influential in tech in India have more of a social approach to infrastructure, and so this notion of public digital infrastructure has, not surprisingly, emerged mostly out of India.
#
And it actually happened maybe partly by accident, partly it was serendipity, that we decided to implement the Aadhaar platform, which was for authentication, and to actually bring people who didn't have an identity prior to that into the digital mainstream. Now, I know there's controversy in India about its misuse and all that kind of stuff, but I think it'd be silly to not acknowledge the tremendous benefits that such a platform can bring to society.
#
So we built that, and it seems to work. It has, you know, 1.2 billion users, and it's very widely used for a number of things. And now we're thinking, oh, maybe we can actually build something on top of this authentication layer, and in India they call it the India Stack, where we can actually stack layers of utility.
#
So I look at Aadhaar as a utility; that's essentially what it is. It just does one thing, are you who you say you are, and it answers that question, yes or no. Simple. It hasn't spilled over into other things; it's not checking your credit scores or anything like that; it's just a simple, one-function thing.
#
And now we're thinking, oh, we can layer a system on top of that: since someone can authenticate you and they know who you are, you can then say, oh yeah, I want to share my transcripts from the university or the school, or I want to share my credit history with the bank, for a certain amount of time.
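#
As a sketch of what consent-scoped sharing in this spirit could look like, assuming a hypothetical token-based grant; all the names and the in-memory store are invented for illustration, and the real India Stack consent architecture is considerably more elaborate.

```python
# One dataset, one recipient, one purpose, one time window per grant.
import time, uuid

GRANTS = {}

def grant_consent(user, dataset, recipient, purpose, ttl_seconds):
    token = str(uuid.uuid4())
    GRANTS[token] = {"user": user, "dataset": dataset, "recipient": recipient,
                     "purpose": purpose, "expires": time.time() + ttl_seconds}
    return token

def fetch(token, recipient, purpose):
    g = GRANTS.get(token)
    # Access works only for the named recipient, stated purpose, and window.
    if (not g or g["recipient"] != recipient
            or g["purpose"] != purpose or time.time() > g["expires"]):
        raise PermissionError("consent absent, mismatched, or expired")
    return f"<{g['dataset']} of {g['user']}>"

t = grant_consent("amit", "credit_history", "bank_a", "loan_check", 3600)
print(fetch(t, "bank_a", "loan_check"))  # works until the hour is up
```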
#
I like the thinking around it, because it's a utility, not unlike the internet, not unlike the highway, where you're trying to lay the rails for a utility that will be used like crazy, and where you don't actually want the utility provider to be making a ton of money off of it, because it's a utility; it should be priced like electricity or water.
#
It's a basic thing that everyone engages in, like moving money around. It's a utility; there's so much of it happening, it should happen almost costlessly, because, after all, it's just a ledger entry. It's not like you have a pony express moving money from A to B; that costs money. This is costless, and so it should be costless. It should be a utility.
#
So I like the thinking in India around this, which is around data that is personal and non-personal. Non-personal becomes community data. So maybe, you know, if you and I go to the hospital, maybe that data can be anonymized and it becomes sort of a community data set, and you can say that people over 50 who have this condition, who are exposed to this, are at risk, and therefore we can craft some health policy around that, based on community data.
#
Now, this thinking is in its early stages. I've read the reports that have come out around this; they're interesting, but they have all kinds of holes in them that haven't really been thought out. But that's okay; we can sort those out.
#
So I like the thinking in India around this sort of public digital infrastructure, and building layers on top of that where people are in control and consent to have their data used, for a certain amount of time and for a specific purpose. To me, that's the best of both worlds, where users are aware and in charge and can control their data.
#
In the US, we've lost control over that. People have lost control over where their data is, who's using it for what, and what they can infer from it.
#
The Chinese model is fascinating and worrisome at the same time. It's fascinating because they've done amazing things with it. They've exercised control in a way that's been quite effective, presumably, for things like COVID, to the extent that you can trust information coming out of China. They've been quite effective at dealing with that, and they've been quite effective at gaining efficiencies by having data centralized.
#
But there's a nasty side to it. We read about these excesses in Xinjiang against the Uyghurs, and God knows what else is there; dissidents are rounded up and threatened and things like that. So there's this very dark side to that centralized model that really bothers me, and it concerns me when people project that as the better model for internet governance, because, knowing human history, we should be worried.
#
I mean, I don't trust any institution fully, any kind of government. For the most part I trust the US government, but they routinely violate their laws. Well, I don't know how routinely, but the example that comes to mind is, you know, who was that swami from Pune who moved to Antelope, Oregon?
#
Osho Rajneesh.
#
Yeah, Rajneesh, Osho, right. I mean, with Osho, the US government essentially decided, you know, the one thing you can't do is mess with the US government openly and blatantly; they'll get you. It's a free country, it's a democracy, you have rights, but if you overstep, they'll get you. And that's what the US government did: they violated their own laws to get him, and basically set an example.
#
Yeah. So that's what bothers me about the Chinese model. Yes, I buy the tremendous benefits of efficiency, and all the good things that come with centralization and control, that you didn't have in the old-style centralized regimes, the communist regimes, where there was no information; that was a terrible governance model in terms of getting anything done, and we saw that in Russia, we saw that in China, just bad governance models. Whereas now they're swimming in data and saying, wow, we can do our job so much better; if only we had this in the old days, maybe it would have worked then.
#
So there's this, for lack of a better word, assumption that some people make, that maybe the centralized model is a superior governance model because it leads to efficiencies. But it is scary to me, I think.
#
You know, if I may think aloud, and I'll lead up to a broader question: it seems to me that the Chinese model, in some senses, is what the Indian state would like its model to be, except that it isn't efficient enough. I'll go back to a keyword that you used when you were speaking about the Indian model, which is consent. My issue with Aadhaar has always been, and I don't want to enter the debate about what kind of system it is, but my problem with Aadhaar always was, the imposition of Aadhaar, the fact that it was an act of state coercion, because there your consent goes out of the window.
#
And that kind of leads me to a couple of hypothetical questions, and again, these are thought experiments. Assume that there is a state that says: I have access to all the data that there possibly is, including even what you're thinking, and therefore I can bring about the best possible outcomes for society; if you just submit to our will, we'll take care of everything. Even if it maximizes happiness, is that necessarily the right thing to do?
#
And to take it even further, if a state was to say: listen, we've figured out the tech, we've figured out the data of how your neurons work, we'll plug electrodes into the brains of every individual, and we'll make sure that you're permanently in a state of the highest possible happiness, is that something that you would be comfortable with? Because I would certainly not be.
#
Obviously, I would say that the one thing that should be sacred in all of this is individual autonomy, and therefore consent. And sometimes we get carried away with the power of data and the seductive allure of a technology like Aadhaar, which can do all of these things, and we forget the basic core principles that make a democracy a democracy, which is not just the fact that you have elections happening, but also that individual rights are taken care of.
#
And if you have that mentality, that consent doesn't matter, Aadhaar is for everybody's good, then that mentality creeps over into what is Chinese-style authoritarianism, except that I don't think it will happen in India, because our state is simply too incompetent. What are your thoughts on that?
#
There was so much stuff I wanted to say, but I'll bring it back to what you started with, which is how to think, and what a good theory is. A good theory is one that makes the fewest unreasonable assumptions.
#
And so I would turn the question around and ask yourself this: this scenario which you painted, this utopian scenario...
#
Dystopian.
#
Well, no, but utopian in quotes, which is that your happiness is always maximized. Believe me, if I know what you're thinking, I'll feed you the right amount of dopamine and you'll be in bliss forever, so what are you complaining about? So the question I would ask is: what assumptions is that theory hinged on? Are you comfortable with the assumptions on which that theory is based? And I would wager that you will not be comfortable with those assumptions, and therefore you'll say that's a terrible theory, and I would concur with that.
#
Because, right off the top of my head, the assumption it makes, number one, is that it's even possible to do: that humans don't have any free will and that I'll just tell you what you want. So that's an assumption.
#
Perhaps, and I'm just thinking aloud here, maybe the other assumption is that the state will always act in our best interest and will never make a mistake. Again, a huge assumption, because states make mistakes.
#
My assumption is the opposite of that, but continue.
#
I'll respond to all of this. But the reason I'm saying that that's a dystopian and not a utopian future is exactly because I don't buy the assumptions that it's predicated on. So I'll lay out my assumptions.
#
My core assumption here is that the brain is a machine that we haven't figured out yet, but we will one day. Now, when will we? I don't know, but we will surely figure it out one day. It could be a century down the line, if we survive that long; it could be a few decades; you never know. But it is a machine, it is figure-out-able. We haven't done it yet; someday we will. Maybe some superior artificial general intelligence will do it for us instead, and use it to control us; that's irrelevant.
#
Secondly, I believe the state is always, and without exception, a malign entity: that its only interest is towards itself, that it is predatory and parasitic, although its purpose, the purpose of its existence, is the opposite, to safeguard individual rights. But it will always end up doing the opposite, and therefore we must show what Jefferson called eternal vigilance.
#
Which is why, you know, again, if you ignore the assumptions, I mean, a thought experiment doesn't always, I think, have to have watertight assumptions, but the core point that I'm getting at here is that, to me, individual autonomy is sacred. Even if it was possible for someone to plant electrodes in my head and keep me in a state of bliss all the time, I would not want that, because I would see it as an attack on my autonomy, and I would never consent to it.
#
And yet there is a way of thinking in the world today that says, look, consent doesn't matter, the state will do what is good for the people. And the imposition of Aadhaar is an example of that; what China is doing is another example of that.
#
But who makes this assumption, that the state is good for the people, other than, let's say, the state itself?
#
I mean, well, the state itself, but who cares about that, right? And I don't even know whether all states really believe that. Maybe the Indian state believes that, but I certainly don't think most Americans trust the government.
#
In fact, while we're talking about it, one of the things that people all over the world always wonder about Americans is: why are they so hung up about guns? Guns kill people, there should be gun control, this is senseless. And I totally get it. But what they don't get about America is exactly what you're getting at: people don't trust the government.
#
So when people say, oh, we shouldn't have guns, well, that assumption is based on trusting the government. When people elsewhere say that they trust their government, well, maybe not 100 percent, but a lot, that statement says: I trust the state, I don't need to protect myself.
#
In the US it's very different, and it's very difficult for the rest of the world to understand this, that it's based on this assumption that, hey, the state might start doing some shady stuff; we don't completely trust the state. And that's what I love about America. It is, you know, this society that basically says: no, we shouldn't trust the government, we should have checks and balances, and the government can tell us what the hell it wants about how this will be good for everyone, no thank you. I want my individual freedom.
#
right unfortunately what's happened is that you know this third actor namely digital platforms
#
social media platforms have come into the picture right so orwell sort of didn't quite
#
have it right right it's not your danger isn't from the government necessarily right it's
#
actually from these private entities right that that have in some ways become even more
#
powerful than the government right so it's not the government that's necessarily the
#
bad person anymore we may not trust them but we need them and we need to somehow work with
#
them right you know to create this better future so i don't know if i answered your
#
question i mean i went around it in several ways but i think we're on the same wavelength
#
and as long as people maintain that skepticism about institutions which they should we need
#
to craft policy with that in mind the fact that we cannot trust any institution completely
#
and therefore we need these checks and balances and some of these solutions by the way will
#
come from technology itself right technology is advancing right we're making tremendous
#
strides in encryption and you know new kinds of technologies that might actually work towards
#
you know preserving privacy remember that we're always working with old technologies
#
that we're adapting to new phenomena that we're confronting and so we're dealing
#
with a whole new world we're using old technology the optimist in me says that the technology
#
will itself provide some of those solutions that preserve this delicate balance we need
#
in a healthy democracy you know between individuals and state overreach and corporate overreach
#
and these are the three sort of legs of a healthy democracy and we need these checks
#
and balances you know between them you know i mean raghuram rajan has this great book
#
you know where he talks about these institutions and i'm looking at the world the same way
#
but in terms of sort of the information and control and checks and balances on each other
#
you know in this new era of data and ai that's fascinating and insightful but you didn't
#
answer my question but i'll take you back to that and ask a more pointed version of
#
it but before that i'll actually take issue with what you said about america's distrust
#
of the state i think a lot of that is rhetorical because even if uh you know some of them want
#
to have guns all of them do trust the currency which is given out by the state it's not like
#
everyone is stashing bitcoin or alternative currencies the way they are stashing guns
#
at home or they're using the barter system so there's an inherent trust of the state
#
and everything they do and in fact uh you know if you just look at the politics you
#
know both parties just want to make government bigger and bigger and as far as india is concerned
#
i've often said that our biggest religion is not hinduism it's a religion of the state
#
that for every single problem we always look to the government for a solution which is
#
a bit of a paradox because you know everybody accepts that the government is dysfunctional
#
and can do nothing properly and yet for every problem the government is suddenly the solution
#
but to get back to my question my question was really about the balance between uh individual
#
autonomy and state coercion for supposedly uh good ends and the position that i took
#
there was on the side of individual autonomy and saying that there is a line you cannot
#
cross no matter how good the ends might be and that to me is why i oppose
#
the imposition of aadhaar that i don't even want to enter the debate about the technology
#
whether it's good or it's bad the point is it was imposed on us and that is a problem
#
to me so what is your feeling on that my feeling on that is that even though it was imposed
#
it was probably a good thing so there are certain things on which we just can't run
#
a counterfactual experiment right so it would have been great if we had had a parallel india
#
without aadhaar and seen you know how things would have proceeded there right uh i suspect that
#
the difference would have been that after aadhaar a lot of people got identity and so
#
what are the benefits of you know 600 million people suddenly getting identity right can
#
you calculate the benefit associated with that versus the cost of imposition and my
#
intuition tells me and i have no data to prove this that that was actually
#
a good bargain that is it's probably in the aggregate been a good thing even though it
#
was imposed now the reason i say it was a good thing even though it was imposed is because
#
of its scope right the trouble with surveillance systems is that very often they have no
#
defined scope right i was watching the movie snowden last night right since we're
#
talking about state overreach right and that was his problem with the cia you know and
#
the nsa is that they're just like collecting data on everyone right phone records you know
#
from verizon they were like just tapped into verizon's servers right but that's not right
#
i mean you know there's got to be you know like who agreed to that like you know
#
do the people agree to that no should they have been consulted for something like that
#
maybe yes right now in the movie snowden you know there's a scene where i forget
#
the guy but he's talking about you know the fact that there are some things you need
#
to do that you can't broadcast to the rest of the world because the whole point of doing
#
them is that you need some sort of secrecy right uh for those things and so a state has
#
to sort of make that decision that in this particular case i think we need to keep this
#
under wraps and so yeah the court orders will be issued by you know judges that are you
#
know not in the public sphere that this is done by secret courts and it's an unusual
#
situation so the state reserves the right to do something like that right it's a very
#
delicate question as to you know what is that situation where it's okay to assume that power
#
because you're trying to address a specific question that's posing a risk or a threat
#
to society right so in the snowden case he said look i just felt that it was morally
#
wrong to be collecting data without any kind of purpose right you know just in a completely
#
unbridled way and therefore he decided to reveal this and become a whistleblower
#
but that's a tricky question to answer and since we started with aadhaar you know the
#
nice thing about that is that it doesn't have this sort of scope creep it's designed for
#
something very specific and people even complain that it shouldn't be used as an identity card
#
that you know it shouldn't be a required form of identification for non-government kinds of
#
purposes which also makes sense to me because that's not what it was designed for right
#
so that was an example of creep and the indian court struck that down correctly in my view
#
right that it should be used for the purpose for which it's designed and as long as that's
#
the case i feel you know comfortable that yes it was imposed but it was imposed
#
with a particular goal in mind and the question is you know was that achieved and now we can
#
argue about the extent to which it was achieved unfortunately we don't have the counterfactual
#
we can't do an a/b test and show it was better but my sense is that it's led to a lot of
#
good things and that on balance it has probably been a good thing for india i won't
#
litigate this further let's kind of move on and let's move on to another fascinating
#
question that you have answered and that your personal experiences have kind of a lot to
#
do with like in the 90s one of the things that you did which fascinates me and i want
#
to know more about that is you started a hedge fund called sct capital where you used uh you
#
know artificial intelligence to make investments and you've spoken in the past about how that
#
experience over the next few years running that hedge fund and running ai to uh you know
#
invest in markets gave you an insight into that larger question on which you've spoken
#
uh at great length which is when should we trust machines and and and you know one of
#
the interesting points that you made about that period of time when you were running
#
that hedge fund was that when you compared what the algorithms were doing to
#
you know what you guys were doing the algorithms would often outperform you so one of the very
#
early learnings from that is that humans are fallible and more fallible perhaps than humans
#
realize in fact there's a lovely quote by you which is so memorable where you said that
#
quote models of men tend to be better than men stop quote which i absolutely love so
#
now my question is given this given that humans are fallible given that artificial intelligence
#
can do all these things like not get lost in the neighborhood thanks to gps that given
#
that ai can do all of these things in practically every domain in which we are interested uh
#
you know when should we trust machines how should we think about ai what are we looking
#
for when we cede decision making to ai for example yeah so you know that's a fascinating
#
question and sort of at the heart of one of my core questions which is you know when is
#
x plus y better than x where x is machine y is human right so when is human plus machine
#
better than machine right to me that's a fascinating question and you know Kasparov uh did this
#
experiment with chess you know we keep coming back to chess you know where he said um humans
#
plus machines outperform machines right which makes a lot of sense to me you know for chess
#
uh because you know human grandmasters have a tremendous uh knowledge about the game of
#
chess uh tremendous experience and now they're uh working with a tool that is in some ways
#
even more amazing than them right that can search the space in ways that they can't even
#
think of come up with moves and then they can say uh yeah like that's a good move or
#
no i think i have a better one right but if i've got a tool that's like a really good
#
machine then as a human i'll probably just let the machine do most of the stuff but occasionally
#
i might do something as a human right uh but i'm willing to believe that for something
#
like chess that x plus y is better than x if i might interject there
#
uh Kasparov was right then but i don't think even he would believe that now because now
#
it is the case that the machine alone is the best and the human doesn't matter because now machines
#
on their own would just wipe out any combination of human and machine because i think the human
#
would just get in the way right so he was right then he was right then so in
#
the degenerate case the human does nothing right so in a sense human plus machine
#
will not be worse than the machine as long as the human just keeps out of the way right
#
uh but yeah i was willing to believe it then right but the reason Kasparov's
#
assertion didn't resonate with me was because 20 years ago i'd done a similar experiment
#
in finance right so we used to run a machine a different machine than what we run now and
#
i did an experiment i persuaded the person at deutsche bank who was allocating capital
#
to let me do an experiment to see if we could actually do better right because very often
#
we would say things like oh the machine wants to buy bonds tomorrow and we know
#
there's a meeting of the fomc and they're going to raise rates like what the hell this
#
makes no sense they're going to raise rates and it wants to buy bonds tomorrow we should
#
be selling so we did an experiment there were six of us where we had a little budget where
#
we could actually override the machine's decisions and me being the head of the group i was you
#
know i don't look at the markets you know on a granular basis i look at them occasionally
#
just to see what's going on but i don't really look at them because it's just noise to me
#
you know whereas my trader i had a human trader and she was the first to go face down you
#
know so she's watching the markets all day and so she would intervene more often and
#
essentially the results of my experiment were that we all did worse than the machine now
#
it doesn't necessarily have to be the case always but to me that was an interesting lesson
#
which was like don't mess with the machine you're just going to do worse.
#
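a minimal sketch of the override experiment just described, with entirely made-up numbers: a machine that wins 53 percent of its trades, and a human who overrides a fifth of its calls with gut feel that carries no edge.

```python
import random

random.seed(7)

# Made-up numbers, for illustration only: the machine wins 53% of trades,
# the human overrides 20% of decisions with intuition that has no edge.
N = 10_000            # number of trades
MACHINE_EDGE = 0.53   # machine's win rate: wrong almost half the time
OVERRIDE_RATE = 0.20  # fraction of decisions the human overrides

def trade(win_prob: float) -> int:
    """One unit gained on a win, one unit lost on a loss."""
    return 1 if random.random() < win_prob else -1

machine_only = sum(trade(MACHINE_EDGE) for _ in range(N))

with_overrides = sum(
    trade(0.50) if random.random() < OVERRIDE_RATE else trade(MACHINE_EDGE)
    for _ in range(N)
)

print("machine alone:         ", machine_only)
print("machine with overrides:", with_overrides)
# Diluting a small statistical edge with zero-edge intuition lowers the
# expected total: don't mess with the machine.
```
#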
and certainly most of the experiences i've had since then right when you know we've cut risk or
#
stuff like that for some reason maybe the client wants to cut risk or you just think that
#
the market's riskier than usual it invariably turns out to be you know the case
#
that you have two left feet so to me the interesting question isn't whether humans always do better
#
than machines but when they do better right and to me that's one of the open questions
#
and i often assert that in finance which is such a noisy kind of space that you know you
#
as a human shouldn't just mess with it you know if you trust the math and you trust the statistics
#
then just stay with it because that's what you've designed and now you're just trying
#
to get cute and think you're smarter than the machine which is actually yeah you're
#
an intelligent person but that doesn't say anything about your ability to forecast markets
#
right so for a problem like that i'd say i'm not willing to believe Kasparov's assertion
#
but in something like health care right like let's say cancer which is a fascinating
#
area now because machines are becoming so much better at seeing right at imaging i'm
#
not willing to sort of throw my life in the hands of a machine just yet right because
#
i don't think that machines in health care are sufficiently intelligent that i should
#
trust them i still want that human oversight i still want that expert who has tons of experience
#
and even though that expert might have looked at i don't know 3,000 images in their lifetime
#
as opposed to the computer that's seen three million and therefore has an advantage in
#
terms of how it looks at images there is that gestalt that humans make for lack of a better
#
word that lets them do better than just what the data is telling them it lets them invoke
#
experiences and say oh yeah this is you know all this is good but i i remember seeing these
#
cases you know two years ago where something just didn't quite fit and therefore i want
#
to do another test or i want to probe further you know i'm not willing to trust the machine
#
on this one right so in health care we're still in that phase where we need humans you
#
know for the most part we're not willing to trust the machine same thing with driverless
#
cars you know i told you like you know machines actually see images better than humans do
#
you know that they're better at recognition in many ways than humans are but humans
#
just have that extra ability right so you know when i was talking to that individual the scientist
#
from one of these driverless car companies who told me you'll die five six times a year
#
if you let it go on autopilot what he told me was that it's not that the machine doesn't
#
see as well as humans it actually does but humans just invoke this other level of intelligence
#
when they're driving right that if you see a shadow move in the periphery of your eye
#
you know you recognize hey maybe that's an animal you know maybe that's another car that
#
i haven't seen yet or something like that right you just sort of have this alarm that
#
goes off in your head that driverless cars don't at the moment right i mean i went through
#
this driverless car experience a couple of years ago where on certain turns it would
#
like go really fast and then suddenly stop if it saw a car in front of it whereas i
#
as a human you know i'm taking the turn anticipating that i might actually see something in front
#
of me so i'm taking it slower right so there's all this human intelligence that we have that
#
we use to our advantage um and even though our you know machinery out there in terms
#
of sort of processing power is a fraction of what computers have somehow it enables
#
us to do some pretty amazing stuff you know where we invoke this deeper level of the model
#
of the world that we have on demand when we need to you know that machines are unable
#
to do at the moment right and so that's why we don't really trust them in those situations
#
because the cost of error is too high right when the cost of error is low we trust them
#
right so in the hedge fund example you know if i can keep the risk associated with every
#
position low then even if that position blows up i don't really care i only had a little
#
bit of risk allocated to it and so that's what i realized that you know in the finance
#
world even though the machine is wrong almost half the time right the error consequences
#
are smaller so i'm willing to believe it and trust the math but whereas in healthcare you
#
know i might die or you know a driverless car might make a mistake and that error is
#
costly in fact i'm dealing with a situation you know where a good friend of mine actually
#
has you know is in an advanced stage of a certain kind of cancer quite likely
#
because of a false negative diagnosis in 2014 you know i remember this you know
#
because it was at a famous hospital in india which i will not name you know
#
but they said yeah you're fine right and you know and this cancer spread and it was a high
#
cost of error it's a really costly false negative right now in this case the false negative
#
was actually a human right but we excuse humans for these mistakes right we're not perfect
#
and we excuse humans for these kinds of mistakes even when they're severe right will we excuse
#
machines for similar kinds of mistakes no we subject machines to a much higher standard
#
when it comes to these kinds of decisions right and yes machines will make mistakes
#
as well but we want to make sure that they don't happen very often and the consequences
#
are not severe so that's how i look at the world of trust with machines at the moment
#
and so it's a fascinating question as to you know when we trust machines when we don't
#
trust machines and we're in a state of flux right now where machines are getting better
#
but in general the tendency is towards trusting machines more with problems as we get more
#
data as we get more comfortable with their prowess in terms of predictability and as
#
we get more comfortable that they're not going to make these errors that will just kill us
#
or cause you know really severe damage right those are the things that we are
#
aware of that determine how much we let the machine control decision making in fact i
#
was quite taken by a term that uh you know you've used in this regard i think what you
#
call the predictability spectrum and that took me back to another mind sport that this
#
time i actually played professionally for a while which is poker where you typically
#
calculate the expected value or the ev of every decision which would depend on the frequency
#
of something happening on one axis and on the other axis the cost and therefore you
#
determine the ev and you take the action and what you are sort of postulating as far as
#
trust in machines is concerned is that for one you have to think about how predictable
#
it is but on the other you also think about you know the cost of going wrong for example you
#
point out that uh you know autonomous cars might be much more predictable and reliable
#
than say day trading machines but the mistakes that a day trading machine can make can be
#
said to be much smaller and can be limited whereas an autonomous car failing could just
#
run over five kids near a school and you know that cost is huge which kind of makes
#
a lot of sense to me that kind of spectrum that as you move towards greater reliability
#
and the ev so to say or the expected cost of the wrong decision goes down your trust can go up accordingly.
#
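a quick sketch of that spectrum in code, with purely hypothetical numbers: two machines compared by expected cost of error and by the size of the worst single error.

```python
# Purely hypothetical numbers: two machines on the predictability spectrum,
# compared by expected cost of error and by the worst single error.

def expected_cost(p_error: float, cost_per_error: float, decisions: int) -> float:
    """Expected total cost = error rate x cost per error x number of decisions."""
    return p_error * cost_per_error * decisions

# Day-trading bot: wrong often, but position sizing caps each loss.
trading = expected_cost(p_error=0.45, cost_per_error=1_000, decisions=10_000)

# Autonomous car: wrong very rarely, but one error can be catastrophic.
driving = expected_cost(p_error=1e-6, cost_per_error=10_000_000, decisions=100_000)

print(f"trading bot, expected cost:    {trading:>12,.0f}")
print(f"autonomous car, expected cost: {driving:>12,.0f}")
# The expected costs are comparable, but the worst single error differs by
# four orders of magnitude, which is why trust tracks both quantities.
```
#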
except that's a rational
#
way of thinking about it and what many humans tend to do when it comes to ai is also react
#
at a psychological level for example if autonomous cars one day reach the level which they might
#
already have i'm not sure what the state of the art is but if they reach a level where
#
autonomous cars would lead to uh one tenth of the fatalities in a given year as is the
#
case now people would still object to it because that one tenth would be the seen and
#
you know the other nine tenths all the people who didn't die would be unseen and therefore
#
you'd build stories around uh the people who have died that so and so was outside of
#
school and a car went over him it would be hard to make the aggregate argument that look
#
overall we are all much safer because of it and people often you know expect technology
#
to be a panacea like i remember back when i used to be a cricket journalist as well and
#
i was one of the earliest uh proponents of hawkeye the technology which came in in the
#
early 2000s and the use of technology as an aid to umpires and the arguments against were
#
always that they weren't perfect and my point was they don't have to be perfect
#
if they can take your decision making accuracy from you know umpire alone being right 93
#
percent of the time to umpire plus tech being right 97 percent of the time that's fine with me.
#
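the arithmetic behind that point, using the same illustrative figures:

```python
# Illustrative figures from the argument above: umpire alone is right 93%
# of the time, umpire plus technology is right 97% of the time.
umpire_alone = 0.93
umpire_plus_tech = 0.97

errors_alone = 1 - umpire_alone          # 7 mistakes per 100 decisions
errors_with_tech = 1 - umpire_plus_tech  # 3 mistakes per 100 decisions
reduction = (errors_alone - errors_with_tech) / errors_alone

print(f"mistakes per 100 decisions: {errors_alone * 100:.0f} -> {errors_with_tech * 100:.0f}")
print(f"relative reduction in mistakes: {reduction:.0%}")  # roughly 57%
```
#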
but it's a human tendency to focus on the three percent that goes wrong the kid who
#
dies in front of a school because a tesla went out of control or whatever uh so you
#
know what do you feel about this aspect of it and the way that we react with suspicion
#
to machines you know you said it really well uh you talked about expected value which is
#
kind of when you put it all together you know what's your expected value of uh accidents
#
right did it go from seven percent to one percent that's great right so on that metric
#
you're doing really well um the other thing you said and i don't know whether you used the
#
term was that worst case right and so that's what i try and bring attention to that
#
it's not the expected value alone that matters it does matter right you do want a better
#
expected value but what you care much more about is that worst case scenario right that's
#
what you really care about because that can like wipe you out right and you're not willing
#
to risk that right so in our case you know we don't know what these worst case scenarios
#
are yet with driverless cars which is why we have this feeling of real discomfort right
#
and so we're trying them out in the wild and all of that you know personally i feel that
#
the leader in driverless cars will probably be amazon because you know i think that you
#
know deliveries will get automated first right because the cost of error there is lower right
#
you don't have humans sitting inside so if you you know bash up a little vehicle that's moving
#
at 10 miles an hour and the groceries that were in it not a big deal right so i suspect
#
that we will learn about these errors of autonomous vehicles by actually putting them in the wild
#
and we're not going to put them in the wild right away you know like driving at you know
#
70 miles an hour on the highway and you know aggressively in cities and stuff like that
#
that that you know that ain't happening right what we're going to see is a much more cautious
#
introduction and my prediction is that you know if you were asking who do you think will
#
be the winner i think it'll be amazon they've already acquired a you know
#
autonomous vehicle maker you know quietly and you know by the way with the pandemic
#
i see these guys delivering groceries in manhattan you know on bicycles right you see these guys
#
you know with trays behind them and they're delivering groceries all over manhattan right
#
i can just imagine a little vehicle you know juggling along at 10 miles an hour that you know
#
sends you an sms on your phone saying hey i'm going to be at uh you know bleecker and laguardia
#
at 10:33 or at some designated spot where you pick up your stuff you know and you go pick it up
#
right that kind of scenario is much more likely to happen where we sort of gradually get comfortable
#
with the cost of errors and now an insurance industry can develop around it right but you
#
get the idea right because you know insurance didn't come out of the blue it developed because we
#
have actuaries that count how many accidents happen and what the severity is you know and
#
you know in the u.s we have these laws around auto insurance which are no fault laws many states have
#
that where you know you have to settle up regardless of who's at fault and these laws
#
are sort of really retarded when you think of them because they're based on the assumption of
#
no data being available right that it's impossible to establish fault right two years
#
ago i was sitting in my car in union square and an escalade came and like slammed into me right
#
i thought there was an explosion right and the guy said oh sorry man i fell asleep right and you know the
#
cops came and there was like a whole case around it but i thought to myself that and by the way
#
his insurance company didn't pay up right because they couldn't prove conclusively that it was his
#
fault right now imagine in today's era right you know you got a sensor inside the car outside the
#
car like everything is visible right if you could establish fault unequivocally would you still need
#
no fault laws no you know exactly who committed the error right laws will change right so a lot
#
of our assumptions a lot of our laws a lot of our practices are outdated and they're based on
#
no data being available when we start getting data we'll move towards a more rational and more
#
efficient situation all around where things become evidence-based so yeah driverless cars right now
#
too high cost of error but over time we'll reduce that cost of error because we'll know the kinds of
#
errors that they make we'll correct them to the extent possible and we're going to be left with
#
some residual errors that we'll be comfortable with and it'll be a no-brainer right
#
where deaths go from you know four million to forty you know i mean something that's stark
#
and you know we're in a situation where even when those forty things happened they weren't crazy
#
violations and they can be covered you know comfortably by insurance because we'll know
#
so much more than we do now i think jane and gene must be really chilling listening to this because
#
they're like these two guys are just you know just talking randomly it's no threat to us
#
i've taken a lot of your time so i'm going to kind of go with three final lines of
#
inquiry though all of them are kind of broad and could go into interesting directions but here's
#
the first of them and it comes from a stephen hawking quote and again i discovered the quote
#
through one of your articles and it leads to my larger question and his quote is whereas the short
#
term impact of ai depends on who controls it the long-term impact depends on whether it can be
#
controlled at all stop quote and this leads me to the paperclip problem of nick bostrom which
#
for the sake of my listeners i'll quickly read out nick bostrom's description of the paperclip
#
problem where he says quote suppose we have an ai whose only goal is to make as many paper clips
#
as possible the ai will realize quickly that it would be much better if there were no humans
#
because humans might decide to switch it off because if humans do so there would be fewer
#
paper clips also human bodies contain a lot of atoms that could be made into paper clips
#
the future that the ai would be trying to gear towards would be one in which there were a lot
#
of paper clips but no humans stop quote which is quite a delightful uh vision of an interesting
#
sort of dystopian future but a lot of people have recently expressed their worries about ai
#
becoming powerful and almost in a sense becoming sentient and having interests of its own which
#
it will of course rationally pursue and where that leaves us what do you feel about this i mean do
#
you think that ai is not just an aid to making our lives better but also a threat to us in
#
direct ways in and of itself not just in how it you know accentuates the worst aspects of
#
humanity oh big big big question this one um and uh both great examples right hawking was obviously
#
concerned about whether technology can be controlled at all and it's a great question
#
right whether ai is controllable at all and you know nick's example points to
#
a larger problem that has also been discussed i mean there are these things called asimov's laws
#
you know which are like really interesting that have to do with you
#
know robots cannot you know harm a human right and yeah but when you think about it there are
#
all kinds of problems with you know asimov's laws which is you know what about something that causes a little
#
harm in the short term but longer term leads to tremendous benefits right so like the asimov's
#
laws are sort of somewhat simplistic in that sense the harm like what do you mean by harm like you
#
know because you know tough love can be viewed as harm too right i'm going to be tough on you you
#
know but it's out of love because believe me in the longer run things will be a lot better for you
#
right and asimov's laws sort of ignore tough love and those kinds of situations and you know
#
nick's example actually is somewhat related to my example earlier on where i was talking about
#
the fact that these ai machines are driven by objective functions you know and those objective
#
functions can lead to behavior that has unintended consequences right you say well you know i didn't
#
realize that this was going to happen and so there was an unintended consequence.
#
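a toy sketch of that point, with invented functions and numbers: a naive objective that counts only paperclips drops the safety pause, while an objective that names the side effect keeps it.

```python
from itertools import product

# Invented example: a paperclip machine picks a speed and whether to keep a
# safety pause; the naive objective counts only paperclips, so the optimizer
# drops the pause, which is the unintended consequence.
def paperclips(speed: int, pause: bool) -> float:
    return speed * 10 * (0.9 if pause else 1.0)

def injury_risk(speed: int, pause: bool) -> float:
    return speed * (0.01 if pause else 0.10)

configs = list(product(range(1, 6), [True, False]))

naive = max(configs, key=lambda c: paperclips(*c))
print("naive optimum:      ", naive, "risk:", injury_risk(*naive))

# Safeguarded objective: penalize the side effect explicitly.
safe = max(configs, key=lambda c: paperclips(*c) - 100 * injury_risk(*c))
print("safeguarded optimum:", safe, "risk:", injury_risk(*safe))
# The behavior changes only when the objective names what we actually care
# about; safeguards have to be written into the objective function itself.
```
#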
you know now being the optimist that i am i believe that you know human beings are smart enough and that we will put
#
you know adequate safeguards into systems where we let machines make decisions right now i say
#
that with complete humility because i realize that you know a machine might unknown to humans
#
decide to do something that we had not envisioned and one of those things would be that it just
#
disables the switch right for whatever reason right and that's the kind of dystopian scenario
#
that i worry about and i really don't have an answer to you know so so you know being an optimist
#
i believe yeah we'll go we'll figure these things out right we'll put adequate safeguards around it
#
you know we'll make sure that you know this thing just doesn't run amok but the larger
#
philosophical question remains and i don't have an answer to that i wish i did which is what if
#
you do get a machine that becomes sufficiently sentient that it disables the
#
switch and by the way this you know i don't know about you but sometimes i press the off switch
#
on my computer and it doesn't respond you know it just doesn't turn itself off and those situations
#
always remind me of this dystopian scenario that you just painted which is sorry pal like
#
yeah i know you thought you had control but you know i took charge a few years ago of that switch
#
and uh you know i ain't letting go uh this is really scary because now i don't know whether
#
i'm in a conversation with vasant dhar or vasant dhar's computer who's really in charge here uh my
#
next sort of broad question here actually follows on from this which regards um you know what you
#
guys call artificial general intelligence now for the benefit of my listeners a quick sort of primer
#
that artificial special intelligence or narrow ai as it is also called and vasanth can correct
#
me if i'm using these terms in a wrong way would refer to narrow functions in which computers have
#
already surpassed us for example a calculator will calculate numbers much better than i can
#
or the gps app on my phone will uh you know be able to find its way around the city much better
#
than i can and so on so in terms of asi or artificial special intelligence or narrow ai
#
we are way behind now there is also talk by ai experts of what is artificial general intelligence
#
which is almost like a holy grail which is uh i suppose an analog of human consciousness that
#
when a machine gets that kind of self-awareness that you can say it has something akin or analogous
#
to human consciousness in which case it immediately becomes superhuman because in all of those
#
specialized functions it is already way ahead of us so my questions would be that one how far away
#
do you think this is uh from happening two is it something that we should be worried about and three
#
the interesting ethical question that then comes up is that once a machine achieves agi and is
#
therefore clearly superior to us in that aspect and we are just sort of you know a moist machine
#
as the phrase goes you know then what gives us ethical superiority over the machine like what
#
is to stop that machine from saying that i will not be your slave that is ethically wrong i am
#
superior to you you will be my slave and is there a plausible case we can make against that does
#
our special status as ethical beings then rest on not our sort of cognitive faculties or our
#
consciousness of ourselves but on merely being flesh and blood is this stuff you've thought about
#
yeah i have it's a fascinating question and you know i was thinking of something that you said
#
which is consciousness right and you know one of the sort of philosophical questions that some of
#
my philosopher colleagues are much better placed to answer is like you know can machines achieve consciousness
#
i don't know the answer to that question if they can achieve consciousness then you're right right
#
then you have like this conscious entity that's clearly superior in that you know it knows the
#
math of things better than we do it does number crunching better than we do and it also has this
#
general intelligence so yeah in a scenario where machines achieve consciousness you know i could
#
see that as being a relevant question i just don't know whether machines can achieve consciousness
#
and what that even means uh with regard to your slightly easier question or easier part of the
#
question which is around artificial general intelligence right so agi is like a construct
#
we've created to distinguish like intelligence that's of a more general type than like specific
#
problem solving kinds of capabilities and it has several forms but to me the most important form
#
of it is common sense right that is much of human agi if you want to call it that like if you want
#
to like just put a simple term on agi it's common sense right that is we just have a lot of common
#
sense right now how do we learn that well that's a very fertile area of ai at the moment right now
#
and it reminds me of like in 1989 1990 i spent some time in this ai research lab in austin texas
#
called mcc now one of the big groups there was building a system called cyc and it was
#
supposed to have common sense knowledge now some people thought that was a visionary project i just
#
thought it was misguided and it didn't make any sense i mean that's what i felt because people
#
were trying to teach the machine common sense like top down right remember i talked about the paradigm
#
of logic dominating the early days of ai right so now people were saying well you know you go to a
#
restaurant well it's customary that you pay a tip well that's common sense right no that's not
#
common sense right it's something that you as a human kind of observed over and over again and
#
then just started doing it right and there's nothing logical about it right it was just
#
acquired over time and now that's one part of artificial general intelligence right you know
#
you can also think in terms of you know if you're walking down the street you know you don't expect
#
to see projectiles popping out of the earth and flying into the sky right i mean it just doesn't
#
happen it's not reality so we grow up thinking that we shouldn't expect to see that right we
#
grew up thinking that if you toss a ball into the air it just falls down right but if you grew up in
#
outer space that wouldn't happen right so here on earth common sense is that objects fall down on
#
the earth right but you can imagine a civilization out there in outer space that lives in zero
#
gravity their common sense would be completely different right it would be more along sort of
#
newton's laws which is without friction if you send something in motion it stays in motion
#
forever you know it doesn't fall down there's no gravity right the common sense there will be
#
different so what i'm seeing happening in ai these days is people are asking themselves like how do
#
infants learn how do babies learn right and the way they learn is through something called an
#
inductive bias in the data all around us right there are certain things that we see often there
#
are certain things we never see at all there are some things we see sometimes and those become
#
common sense right so as an infant you know you can see certain things like if a ball disappears
#
behind an object the infant actually expects it to come out on the other side right it's common
#
sense because that's the way objects move right.
#
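a minimal sketch of that bottom-up idea, with invented observation counts: tally what follows what, and let outcomes that dominate their context harden into common sense.

```python
from collections import Counter, defaultdict

# Invented observation counts: an infant-like learner tallies what follows
# what, and outcomes that dominate their context become "common sense".
observations = (
    ["ball tossed up -> ball falls down"] * 500
    + ["ball occluded -> ball reappears on far side"] * 300
    + ["ball occluded -> ball never reappears"] * 3   # rare surprises
)

outcome_counts = Counter(observations)
context_totals = defaultdict(int)
for event, n in outcome_counts.items():
    context_totals[event.split(" -> ")[0]] += n

for event, n in outcome_counts.items():
    context = event.split(" -> ")[0]
    p = n / context_totals[context]
    label = "common sense" if p > 0.95 else "surprising"
    print(f"{event}: p={p:.3f} ({label})")
```
#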
so we've sort of taken a top-down approach to common sense which was the old way of ai just stuff knowledge into a system what we're
#
now asking ourselves is hey we have all this data around us right in terms of everything that
#
happens ultimately is data and we as human beings are observing this data and we're learning from
#
this data in this sort of biased way like bias is actually a good thing here right it's biasing us
#
towards things that make sense and so we learn them and then they become axiomatic for us
#
like just axiomatic common sense kinds of things right the machine has no such idea right it's just
#
doing what you're calling narrow intelligence right but once the machine achieves a lot of
#
this common sense ability in the same way that humans do then machines will also have the
#
common sense that oh if you toss a ball in the air it'll fall down right that if an object you
#
know is temporarily occluded behind another and if it's moving it'll appear on the other side right
#
all of these things machines will learn on their own just like humans do right and at that point
#
we'll be closer to agi right because then we won't have to tell a machine that you know objects
#
that go up fall down it'll have that knowledge in itself so if it needs to use it as part of
#
something else just like humans do you know it'll learn that right just like we learned you know
#
when you're you know you're a kid and your mother gives you that look that that wasn't the right
#
thing to do and you sort of learn that certain things aren't right right you pick up on those
#
cues as a human that yeah those are positive these are negative right machines don't see the world
#
in that same way which is why they don't have any common sense right and we have the common sense
#
because it's just acquired gradually painstakingly from the moment we're born and it's biased by the data
#
we see around us right so the new way of ai is sort of more bottom up where we're getting machines
#
to learn through experience learn through data and then gradually they'll pick up these you know
#
things that we call artificial general intelligence and then they'll become sort of generally
#
intelligent right whether they achieve consciousness in the way that humans do that's a great
#
philosophical question i'm just imagining a future in which a machine is at a restaurant
#
and is tipping the waiter i don't even know if that's utopian or dystopian but uh so i should
#
technically have one more question for you but i'll throw in another one just before that because
#
i realized that uh you know the first episode of your podcast was just out and you have
#
a bunch of other great episodes uh lined up and uh you know because of my privileged position of
#
helping you with the show i'm privy to some of the great guests you're having on so i'd like you to
#
tell my listeners a little bit more about brave new world your podcast what are you trying to do
#
obviously it's driven by your curiosity and your passion towards this field so what are the kind of
#
areas you're planning to explore and so on and so forth tell us a little bit so it's a very broad
#
canvas i'm beginning with uh the impacts of covid on humanity and a lot of these impacts have been
#
mediated by technology and so my goal is to look at this interplay of technology and post covid
#
society and see where it's taking us you know so what's this post covid humanity where tech plays
#
a much greater role in our virtual lives by virtual i mean you know we don't see each other
#
that much we don't touch right we're sort of in this asimov kind of world of the naked sun
#
you know what's humanity going to look like in this era and to me that's a fascinating question
#
because covid has been you know a discontinuity in humanity right it's you know tech was sort
#
of creeping into our lives gradually but now it's just accelerated and it's turbocharged
#
and it's causing all these transformations in humanity so my objective is to look at these
#
across all areas of our lives you know whether it's how we work you know how we stay healthy
#
you know how we travel how we communicate you know how we exercise our spirituality
#
you know how machines are accelerating in terms of their ability to solve
#
certain kinds of health problems cancers how society is being transformed like some of the
#
issues we discussed you know in terms of social media you know how should we think in terms of
#
governing this you know brave new internet world that we're entering so those are the questions
#
that i'm exploring and you know i'm surrounded by people who are a lot smarter than myself and
#
who have such deep knowledge in these areas that i find it you know just fascinating to talk to
#
them so my objective is to explore these areas with thought leaders people
#
who have really thought about these different aspects of humanity and just let them tell us
#
and explore have a conversation with them where i you know largely stay out of the way and you
#
know steer the conversation in the direction that will be of interest to you know people in general
#
right so the audience of this podcast is global and it's everyone young old everyone so it's a
#
podcast for everyone and i'm kicking this off with you know some people i know really well you know
#
some of my colleagues at nyu you know arun sundararajan who's been a leader in the sharing economy
#
you know i'm talking to him about just you know what's happening you know in the u.s.-china war
#
over these platforms i have you know my colleague and friend scott galloway who's you know just an
#
amazing thinker in this space you know he's you know just come up with a book recently about
#
you know just post-covid you know impacts of tech on society i'm going to talk to him about
#
education it's an area that both he and i are sort of intimately familiar with and how
#
the whole education complex is likely to get transformed in the next decade i have sinan
#
aral coming on who is a professor at mit a former colleague of mine who just came out with this book
#
called the hype machine a fascinating book that i highly recommend it's basically a book
#
where he tells a lot of stories but they're fascinating stories and they
#
illustrate certain concepts so it's it's basically science done really well integrated and told
#
through stories so fascinating book and then after that i have some other guests lined up i have eric
#
topol who's a you know a leading authority on ai and cancer and then i have john sexton who was
#
the ex-president of nyu who i you know want to have on and talk about law and just the role of
#
data and evidence in you know the legal sphere you know and then i'm going to have some of my
#
colleagues from the center for data science some you know people who are like really kind of
#
out there at the frontiers of ai just talking about what's the interesting stuff happening
#
uh in ai um you know i want to have some spiritual leaders you know people with large followings and
#
talk to them about you know what the impact of covid has been on sort of the emotional and
#
spiritual aspects of our lives and you know how's that changed the way you interact with your
#
followers so it's a wide open canvas and you know my objective is to get some really interesting
#
people who've thought about these issues and who have you know something interesting to share
#
uh with the rest of the world at this intersection of sort of post-covid humanity and technology and
#
where it's taking us that's fabulous i can't wait to listen to all those episodes as the show
#
unfolds i wish they were already done so i could just binge on them right now uh now i'll have to
#
wait uh for all my listeners the show is called brave new world hosted by vasant dhar so you can
#
just search for that on all podcast apps it's available for free on all podcast apps and you
#
can go to brave new podcast dot com um now my final question for you is you know uh i have a
#
tradition of asking my guests on whatever subject they are talking about um you know
#
looking into the future what gives you hope and what gives you despair and in the context of what
#
we are speaking about ai technology the future and so on i'll rephrase the question as what is
#
likely to keep you up at night out of worry and what is likely to keep you uh up at night out of
#
anticipation yeah so um you know being an optimist i guess i'm a little stunted uh
#
emotionally in that i don't worry enough about those you know dystopian uh states of affairs
#
uh maybe i should but the things that worry me i've sort of you know i've written about them
#
and you know i think that we've gone a little askew in terms of technology and its
#
role in our lives in some ways i think we just need to self-correct a little bit just think a little
#
more deeply about where technology is taking us because some of the areas in which it's taking us
#
don't seem to be particularly good for us there's sort of always this dark side of
#
humanity right and tech is neutral so it can also take us in those directions so if
#
there's anything that keeps me up it's that right that i know humanity and you know
#
the many aspects of it that are ugly and that aren't going to go away they're
#
just part of being human and it's almost like you know worrying about this ugly side of
#
us taking over and technology helping this ugly side as opposed to the the good side
#
you know but being an optimist i don't worry enough about that right but to me it's just
#
you know i've had a great life and i just imagine like if i were around for another 60 years like
#
jesus wouldn't that be amazing like you know even if i think of like what the world is going to look
#
like in 10 years i think it's going to be pretty darn amazing right uh i believe that you know we
#
will have solved a lot of big problems we will solve a lot of health care problems we need to
#
address societal problems because that is probably one of our biggest hurdles right now you know i'm
#
very deeply connected to the u.s i'm also very deeply connected with india there are things i
#
love about these societies but there are things that are just so ugly about both of them that
#
worry me and that bother me i won't go into them at the moment but they're there
#
and i feel that we have an ability to solve a lot of these problems to create a better society
#
through technology i really believe that we do but we also have an ability to do just the opposite
#
and if there's one thing that keeps me up at night that's what it is that i realize that we can
#
actually do some pretty nasty things and create an ugly society i don't think we'll do that but
#
we certainly have the ability to do that right but i just marvel at the kinds of things that are
#
coming down the road you know when i look back at sort of the previous 60 years you know it's been
#
one hell of a ride you know for the most part extremely positive and exhilarating right that
#
that tech you know and i'm a big believer in tech and i don't believe that the future will be uh
#
uh different in that sense i think the innovation will continue it'll even accelerate
#
right and even though you know i said we're still in the bronze age
#
of intelligence you know i think that'll accelerate and we're going to see some pretty amazing
#
stuff appear you know where we're able to invoke the machine and have it do all kinds of things
#
on demand you know that we couldn't even dream of you know at the moment
#
you know i was talking to someone about you know when i first started using computers at iit it was
#
to solve engineering problems that you couldn't solve you know in closed form right so you have
#
to like solve them through iteration and you know i was just telling someone that you know i used to
#
write the program on punched cards and then i would put it in the cubby hole and then come back the
#
next day and there'd be an output saying you know compilation error in line seven you know uh literal
#
paren not recognized or something like that right and i'd correct that repunch that card
#
insert it put it in there and the next day i'd come back and there'd be an error saying you know program
#
compiled execution error in line 33 array out of bounds or something like that and i said oh of
#
course you know my termination condition needs to be better i'd
#
rewrite it put it in there come back the next day and maybe after three or four days i'd see an
#
output right and i'm talking i was what 19 20 21 years old at the time right and in graduate school
#
there was the first screen editor that came out so i could actually edit the file on the screen
#
run it and solve the same problem that took me three days in three minutes right now stuff that
#
used to take me three minutes takes 0.3 seconds and it's done so much better and faster i
#
just you know in the future i'll just have to imagine it and it'll be done so it's going to
#
be a fascinating brave new world in my estimation i can't even imagine what it's going to look like
#
but it's going to be pretty darn amazing well thanks so much i mean i'm optimistic
#
like you and one of the things that makes me optimistic is that we can have conversations
#
like this you know two people like us across an ocean for you know all the tens of thousands of
#
people who listen so thank you so much for coming on the show and you know for your time and insights
#
and best of luck with brave new world thank you uh thank you for having me on your show it was a
#
fascinating conversation you asked some really good questions some tough ones and i know uh
#
there's one out there that you're not convinced about but thanks so much it was a
#
really engaging conversation the time just flew by so thanks again
#
if you enjoyed listening to this episode do head on over to brave new podcast.com
#
or your favorite podcast app and subscribe to brave new world hosted by vasant dhar you can follow
#
him on twitter at vasantdhar you can follow me at amit varma a m i t v a r m a you can browse past
#
episodes of the scene and the unseen at scene unseen dot i n that at least is one thing which
#
is future proof thank you for listening
#
did you enjoy this episode of the scene and the unseen if so would you like to support the
#
production of the show you can go over to scene unseen dot i n slash support and contribute any
#
amount you like to keep this podcast alive and kicking thank you