Podcast: Play in new window | Download | Embed
Subscribe: Apple Podcasts | Spotify | Amazon Music | Android | Pandora | iHeartRadio | Podcast Index | TuneIn | RSS | More
Anders Sandberg is a Swedish researcher, futurist and transhumanist. He holds a PhD in computational neuroscience from Stockholm University, and is currently a senior research fellow at the Future of Humanity Institute at the University of Oxford, and a Fellow at Reuben College.
Over 321 books from 170 plus interviews over 5 years
Newsletter sign up (new and exciting developments)
https://learningwithlowell.us12.list-manage.com/subscribe?u=08ed8a56013d8b3a3c01e27fc&id=6ecaa9189b
PODCAST INFO:
The Learning With Lowell show is a series for the everyday mammal. In this show we’ll learn about leadership, science, and people building their change into the world. The goal is to dig deeply into people who most of us wouldn’t normally ever get to hear. The Host of the show – Lowell Thompson- is a lifelong autodidact, serial problem solver, and founder of startups.
LINKS
Spotify: https://open.spotify.com/show/66eFLHQclKe5p3bMXsCTRH
RSS: https://www.learningwithlowell.com/feed/podcast/
Youtube: https://www.youtube.com/channel/UCzri06unR-lMXbl6sqWP_-Q
Youtube clips: https://www.youtube.com/channel/UC-B5x371AzTGgK-_q3U_KfA
Website: https://www.learningwithlowell.com/
Anders Sandberg links
https://www.fhi.ox.ac.uk/team/anders-sandberg/
https://en.wikipedia.org/wiki/Anders_Sandberg
https://uk.linkedin.com/in/anders-sandberg-9215ab145
https://www.instagram.com/anders_sandberg1/
Timestamp
00:00 Intro /Teaser
00:40 Oxford vs Cambridge
01:10 Myanmar and food
02:35 Monkeys research and societal habits
05:30 Why Myanmar matters
08:10 Antidote to superstition
11:00 Oliver Sacks, brain fragility, Loved ones being replaced by robots
13:25 Universal theory / patterns
17:55 Humans, evolution, and skyscrapers, Project Hail Mary
22:20 Human cognition
25:55 Carl Jung, Collective unconscious, instinct leaning
31:05 How intelligent are machines now
35:55 Turing Test
38:55 Working memory in machine learning / AI
43:00 Open source vs closed source for truthy AI/ML systems
49:00 Biohacking vs AI terrorism
58:33 Threat Registry, procurement of supplies to control and monitor systems
01:02:35 Task switching to offset focus / power of cooking / Books
01:07:11 Cookie dish to make
01:09:32 Difficulty making bread
01:12:22 Westworld human consciousness, minimalism, Elon Musk, and emulating a whole brain Fan Question
01:22:25 Non technical person helping in brain emulation and futurism / Fan Question
1
00:00:00,000 –> 00:00:04,360
I actually told some people from the intelligence world in the US,
2
00:00:04,360 –> 00:00:06,840
so I was at a meeting and I realized that,
3
00:00:06,840 –> 00:00:09,240
oh, I’ve got all the three-letter agencies standing around there.
4
00:00:09,240 –> 00:00:13,040
I told them, by the way, go home and check that I am on your watch list.
5
00:00:13,040 –> 00:00:17,720
Because if I’m not, your method is not working. I should be on a lot of watch lists.
6
00:00:17,720 –> 00:00:21,600
Alright everybody, ready to learn with Lowell today.
7
00:00:21,600 –> 00:00:22,680
We’re here with Anders Sandberg.
8
00:00:22,680 –> 00:00:26,040
He obtained his PhD in computational neuroscience at Stockholm University.
9
00:00:26,040 –> 00:00:27,400
He’s out in Cambridge now.
10
00:00:27,520 –> 00:00:31,560
His focus of his work in Sweden was on neural network modeling and human memory.
11
00:00:31,560 –> 00:00:33,840
Right now he works at the Future of Humanity Institute,
12
00:00:33,840 –> 00:00:36,000
where he centers on high-impact risks,
13
00:00:36,000 –> 00:00:40,120
estimating future technology, which is kind of crazy to think about, and long range
14
00:00:40,120 –> 00:00:41,640
futures. Anders, welcome to the show.
15
00:00:41,640 –> 00:00:43,040
Thank you so much for having me.
16
00:00:43,040 –> 00:00:45,840
First thing, of course, I need to correct you.
17
00:00:45,840 –> 00:00:49,200
That is, of course, I’m in Oxford, not the other place.
18
00:00:49,200 –> 00:00:50,680
OK, OK, Oxford.
19
00:00:50,680 –> 00:00:53,400
I’m a graduate of Stockholm University.
20
00:00:53,400 –> 00:00:57,680
So while I find the Oxford-Cambridge rivalry kind of amusing,
21
00:00:57,680 –> 00:00:59,160
I’m not serious about it.
22
00:00:59,160 –> 00:01:01,560
Yeah, really important thing is the future.
23
00:01:01,560 –> 00:01:06,840
So recently you were giving a talk and you mentioned Myanmar.
24
00:01:06,840 –> 00:01:07,920
And we were just talking about this.
25
00:01:07,920 –> 00:01:09,240
And I grew up on a farm.
26
00:01:09,240 –> 00:01:10,600
I love agriculture.
27
00:01:10,600 –> 00:01:12,520
I love the fact that we can feed the world.
28
00:01:12,520 –> 00:01:16,280
Like I think every farmer in America feeds a everyone in America plus 150 people
29
00:01:16,280 –> 00:01:18,280
somewhere else around the planet, which is fantastic.
30
00:01:18,280 –> 00:01:21,840
But what did you, what was significant about Myanmar that you were, like,
31
00:01:21,840 –> 00:01:24,760
you started the conversation and like, you went on somewhere else, but which is cool.
32
00:01:24,760 –> 00:01:28,240
But there was something there, I think, that you wanted to talk about.
33
00:01:28,240 –> 00:01:33,680
So I’m writing a paper which is going to be presented at Oxford food and cooking
34
00:01:33,680 –> 00:01:39,320
symposium, hopefully in just one week about food combination superstitions in
35
00:01:39,320 –> 00:01:39,840
Myanmar.
36
00:01:39,840 –> 00:01:42,480
And this sounds awfully narrow.
37
00:01:42,480 –> 00:01:44,880
And by my standards, this is ridiculously narrow.
38
00:01:44,880 –> 00:01:49,640
And the reason is partially, I want to give a talk at that conference because
39
00:01:49,640 –> 00:01:51,440
it is good food and interesting topics.
40
00:01:51,920 –> 00:01:55,400
But my co-author’s wife had also gotten this poster from Myanmar.
41
00:01:55,400 –> 00:02:00,760
It’s a poster found in a lot of kitchen saying food combinations that are deadly,
42
00:02:00,760 –> 00:02:05,040
not bad for you in a vague sense, but if you had pigeon and pumpkin, you would die.
43
00:02:05,040 –> 00:02:06,760
Go to and go.
44
00:02:06,760 –> 00:02:07,760
It will kill you.
45
00:02:07,760 –> 00:02:10,080
Gently and coffee, it will kill you.
46
00:02:10,080 –> 00:02:14,240
And at this point, probably you and many listeners would say, well, wait a minute.
47
00:02:14,240 –> 00:02:18,520
I had some of this, maybe not pigeon and pumpkin, but this sounds non-
48
00:02:18,520 –> 00:02:22,240
sensical. And actually most of the things seem to be totally fine.
49
00:02:22,240 –> 00:02:28,200
So why do they believe in these, these combination of that poster and how does that link to
50
00:02:28,200 –> 00:02:31,440
both how culture works and also how we get our food?
51
00:02:31,440 –> 00:02:38,760
It sounds like the study on monkeys where they started by having bananas on the top of
52
00:02:38,760 –> 00:02:41,520
this and stop me if you know this one, but there were bananas on the top of a pedestal.
53
00:02:41,520 –> 00:02:44,240
And then when the monkeys would go to get to it, they sprayed the monkey.
54
00:02:44,240 –> 00:02:44,920
So they wouldn’t do it.
55
00:02:44,920 –> 00:02:47,160
So then when new monkeys came in, they would stop the monkey from going up,
56
00:02:47,400 –> 00:02:49,680
but they cycled out the monkeys that knew about the hose.
57
00:02:49,680 –> 00:02:54,160
Eventually the monkeys were just reinforcing this learned behavior that they didn’t know where it came from.
58
00:02:54,160 –> 00:02:59,120
It sounds kind of like that, but for humans in terms of food, like what’s allowed and what’s not allowed from
59
00:02:59,120 –> 00:03:03,280
something that’s probably deep in the past, like food poisoning, like a, like a taste aversion that
60
00:03:03,280 –> 00:03:04,600
then became cultural.
61
00:03:04,600 –> 00:03:06,440
Got it in one.
62
00:03:06,440 –> 00:03:09,240
I think that is exactly what’s going on here.
63
00:03:09,240 –> 00:03:12,160
So when we grow up, we see what people eat and don’t eat.
64
00:03:12,160 –> 00:03:17,360
And we assume that is normal eating and somebody from another culture,
65
00:03:17,600 –> 00:03:20,960
might say, Oh, those ones are totally delicious.
66
00:03:20,960 –> 00:03:22,040
And we go, what?
67
00:03:22,040 –> 00:03:23,800
You eat those.
68
00:03:23,800 –> 00:03:30,720
Turkey is exporting crayfish to Sweden every autumn, and the Turks find crayfish
69
00:03:30,720 –> 00:03:31,800
That’s disgusting.
70
00:03:31,800 –> 00:03:35,680
And it’s a delicacy in Sweden and so on and so on.
71
00:03:35,680 –> 00:03:41,080
We’re repeating our behaviors from others and even setting up these ideas about
72
00:03:41,080 –> 00:03:43,240
why we’re reasonable and good behaviors.
73
00:03:43,800 –> 00:03:49,600
Now, the interesting thing in Myanmar is not so much that they have various food taboos and superstitions,
74
00:03:49,600 –> 00:03:53,360
but that they have so many of them and that they’re organized in this kind of table.
75
00:03:53,360 –> 00:03:57,680
Because if you go to any country, you will have stories like in Brazil,
76
00:03:57,680 –> 00:04:01,160
where mangoes and milk are said to be a dangerous combination.
77
00:04:01,160 –> 00:04:04,920
And most modern Brazilians like mango glasses and would say, yeah,
78
00:04:04,920 –> 00:04:10,880
that’s an old myth spread by slave owners to tell the mango eating slave that they shouldn’t be drinking expensive milk.
79
00:04:11,280 –> 00:04:13,320
I don’t believe this is a true explanation.
80
00:04:13,320 –> 00:04:17,600
It probably just emerged, perhaps because somebody saw what happens
81
00:04:17,600 –> 00:04:21,440
If you pour a mango juice straight into milk, it curdles and looks disgusting.
82
00:04:21,440 –> 00:04:27,200
And then you assume this is bad milk and the fruit generally a lot of cultures have
83
00:04:27,200 –> 00:04:28,160
assumed this is bad.
84
00:04:28,160 –> 00:04:31,200
And this is probably because of curdling in the stomach.
85
00:04:31,200 –> 00:04:36,120
Of course, it curdles, but you don’t get to see that then then you have your theory about digestion,
86
00:04:36,120 –> 00:04:41,000
which is probably why it got complicated Myanmar because it’s in between China and
87
00:04:41,000 –> 00:04:48,280
India. And both of them have these food theories based on premodern ideas about digestion and
88
00:04:48,280 –> 00:04:53,920
the nutrients that lead to various combinations having various medicinal or harmful effects.
89
00:04:53,920 –> 00:04:57,800
The Myanmar system is totally randomized compared to China and India.
90
00:04:57,800 –> 00:05:05,320
It’s total nonsense by either of their standards, but the idea that the combinations matter might be a key thing going on there.
91
00:05:05,840 –> 00:05:12,960
While most Western systems think about food as various ingredients: can you eat dog or horse or pig?
92
00:05:12,960 –> 00:05:16,640
Well, that depends on your culture, but it’s one ingredient.
93
00:05:16,640 –> 00:05:24,600
It’s not like dog plus pepper is absolutely a problem while pepper and dog in itself is not the problem.
94
00:05:24,600 –> 00:05:30,480
I’m curious about the underlying reason why this is fascinating to you. And my,
95
00:05:30,480 –> 00:05:34,680
my internal guess is that since you focus on future technology and these things that come in the,
96
00:05:34,960 –> 00:05:41,440
the what’s coming, the curiosity here is, like, how ideas spread. But I might be off, is that close?
97
00:05:41,440 –> 00:05:46,800
That’s close. It started just because it’s a peculiar situation.
98
00:05:46,800 –> 00:05:48,880
Why do these people believe it?
99
00:05:48,880 –> 00:05:52,240
But as you said, we are people who copy each other,
100
00:05:52,240 –> 00:05:56,720
mimesis, the imitation of others is probably a key way we’re learning stuff.
101
00:05:56,720 –> 00:05:59,320
It’s not the only way we learn things.
102
00:05:59,320 –> 00:06:03,280
There are formal ways like posters on kitchen walls and teachers,
103
00:06:03,280 –> 00:06:05,720
certainly telling us in school about the nutrient pyramid,
104
00:06:05,720 –> 00:06:10,800
but there is also these other patterns that we pick up on and build like culture, although,
105
00:06:10,800 –> 00:06:12,600
and some of them are really good.
106
00:06:12,600 –> 00:06:18,760
A lot of the implicit rules that surround us are absolutely essential for functioning well with other people or
107
00:06:18,760 –> 00:06:22,200
functioning in the world we move in, but a lot of them are just superstitions.
108
00:06:22,200 –> 00:06:23,800
They have nothing to do with reality.
109
00:06:23,800 –> 00:06:28,640
Those monkeys you mentioned earlier on, they remind me a little bit of Skinner’s
110
00:06:28,640 –> 00:06:29,440
pigeons.
111
00:06:29,760 –> 00:06:33,840
So Skinner, the classic behaviorist, in his experiments put pigeons into boxes.
112
00:06:33,840 –> 00:06:42,360
And if they pecked at the right button, they got a food pellet, and some of them got things at particular times, as a
113
00:06:42,360 –> 00:06:44,440
control group exactly at noon.
114
00:06:44,440 –> 00:06:50,440
And if a pigeon happened to do something else just before, maybe standing on one leg, and then they got a food
115
00:06:50,440 –> 00:06:54,960
pellet, it assumed that standing on one leg is something that sometimes gives me a food pellet.
116
00:06:54,960 –> 00:06:57,120
So some of the pigeons became superstitious.
117
00:06:57,120 –> 00:06:59,560
They started doing all sort of weird things.
118
00:07:00,040 –> 00:07:04,080
to get the food, things that had nothing to do with it actually getting the food.
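A minimal sketch (not from the episode) of the mechanism described here: rewards that arrive on a fixed schedule, regardless of behavior, still end up strengthening whatever arbitrary action happened to precede them. The action names, schedule, and numbers below are purely illustrative.

```python
import random

random.seed(0)
actions = ["peck button", "stand on one leg", "turn in circle"]
strength = {a: 1.0 for a in actions}   # propensity to repeat each action

for step in range(1, 2001):
    # the pigeon picks an action in proportion to the current strengths
    action = random.choices(actions, weights=[strength[a] for a in actions])[0]
    # food arrives every 50 steps, completely independent of the action taken
    if step % 50 == 0:
        strength[action] += 1.0        # the coincidental action still gets credited

print(strength)
# Typical outcome: one arbitrary action ends up far "stronger" than the others,
# even though no action ever influenced when the food appeared.
```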
119
00:07:04,080 –> 00:07:10,360
Now the important thing for the future is of course, many of these superstitions are really bad for us.
120
00:07:10,360 –> 00:07:16,680
For example, by the way, there’s hot food and cold food and they’re kind of different categories that you need to be careful with.
121
00:07:16,680 –> 00:07:19,200
That’s very common in many societies.
122
00:07:19,200 –> 00:07:27,480
And this doesn’t matter very much in many situations, except sometimes you really want to boil water to give to somebody who’s got a
123
00:07:27,480 –> 00:07:33,040
fever. But the folk idea would be, yeah, he is too hot.
124
00:07:33,040 –> 00:07:37,920
He needs some cooling food like water straight from the river that has not been boiled.
125
00:07:37,920 –> 00:07:43,440
So in this case, the old superstition is really bad, also for introducing modern healthcare.
126
00:07:43,440 –> 00:07:54,400
So understanding this dynamic, how do we end up getting stuck in weird beliefs, is something that I think might be quite useful for thinking much further ahead and about much more high-tech issues.
127
00:07:55,760 –> 00:08:03,600
It feels it seems like it’s definitely based on the pigeon idea, something innate in humans that we innate animals.
128
00:08:03,600 –> 00:08:07,240
Like there’s some like learned behavior just responding to stimuli in the environment.
129
00:08:07,240 –> 00:08:15,760
I wonder if the fact that we have like a frontal cortex and the ability, like if you have a phobia, for instance, you can like slowly be exposed immersion therapy.
130
00:08:15,760 –> 00:08:17,800
I think it’s what’s called and slowly work through that.
131
00:08:17,800 –> 00:08:25,160
If the like the antidote to superstition is something similar to that where you can like be like exposed slowly to it or
132
00:08:26,120 –> 00:08:35,720
or if we could ever, as a species, get past superstition. Because it does seem like something that, like phobias, might just be inbred in us, and then we just have to counteract it when we build systems.
133
00:08:35,720 –> 00:08:46,480
We’re kind of built to develop phobias against certain kinds of things. If you make a list of phobias people have, you will find that phobias against animals are fairly common.
134
00:08:46,480 –> 00:08:54,000
You find far fewer phobias against inanimate things. It happens, there are people who actually have phobias against snow.
135
00:08:54,560 –> 00:08:59,240
But that’s very rare, but phobias against snakes and insects, they’re everywhere.
136
00:08:59,240 –> 00:09:03,800
And it seems like among higher primates, that seems to be almost a built-in thing.
137
00:09:03,800 –> 00:09:07,520
A baby monkey or a baby human is not normally afraid of snakes.
138
00:09:07,520 –> 00:09:23,040
But if they hear a scream from their parents when they see a rubber snake, they will almost instantly develop a phobia, or at least a bit of a fear. If you try this, and this has apparently been done with chimpanzee babies, with plastic flowers,
139
00:09:23,760 –> 00:09:33,720
nothing happens. We haven’t got that built-in receptor for fearing flowers. You need to have much nastier experiences around flowers before you start thinking that they’re frightening.
140
00:09:33,720 –> 00:09:44,680
So here we have a built-in system for it. And sometimes it can be overcome. I can recognize that I got a phobia and then either using my will power and my mental flexibility to do that.
141
00:09:45,000 –> 00:09:53,880
That’s how I got over the wasp phobia that I used to have. These days I generally don’t like wasps, but I’m not running away like crazy when I see one.
142
00:09:53,880 –> 00:09:57,000
And I could go to a psychologist doing a proper treatment.
143
00:09:57,000 –> 00:10:00,240
This doesn’t work for other weird beliefs, of course.
144
00:10:00,240 –> 00:10:06,560
Especially when you have delusions, you’re extremely resistant to any evidence against it.
145
00:10:06,560 –> 00:10:14,560
And sometimes you have these bizarre neurological states, like, I think it’s the Capgras delusion, I’m always mixing it up with the Cotard delusion.
146
00:10:14,880 –> 00:10:17,320
So one of them is that you believe that you’re dead.
147
00:10:17,320 –> 00:10:24,120
And that is kind of easily disproven in some sense. The doctor typically says, but you’re breathing.
148
00:10:24,120 –> 00:10:29,040
And one classic response from one patient was, yeah, I didn’t know that dead people did that.
149
00:10:29,040 –> 00:10:34,560
All right. The doctor kept on demonstrating that the patient was alive.
150
00:10:34,560 –> 00:10:39,720
The patient was, of course, just adapting his belief about how being dead was.
151
00:10:40,120 –> 00:10:45,520
The other delusion is that your friends have been replaced by replicas and androids or ninjas or something.
152
00:10:45,520 –> 00:10:54,440
And again, as a delusion, evidence doesn’t affect it. These ones are rare or extreme, but we have things that are kind of in between.
153
00:10:54,440 –> 00:10:58,760
And of course, our political opponents are all suffering from very bad delusions.
154
00:10:58,760 –> 00:11:05,360
But once you start being honest with yourself, you realize, I probably have a whole bunch of these things stuck here.
155
00:11:05,360 –> 00:11:08,560
And I wonder which of them I actually would want to get rid of.
156
00:11:09,720 –> 00:11:14,280
There’s Oliver Sacks, a great writer on neurological issues.
157
00:11:14,280 –> 00:11:21,600
If anyone’s interested in an interesting read, I think the one is The Man Who Mistook His Wife for a Hat.
158
00:11:21,600 –> 00:11:29,600
When you get into that stuff, it kind of, it shows how fragile the human brain is and how fragile every day is.
159
00:11:29,600 –> 00:11:33,720
And it’s kind of a marvel that we are able to have eight billion of us running around and not.
160
00:11:33,720 –> 00:11:39,280
I don’t want to, you know, but the, I think the one where
161
00:11:39,520 –> 00:11:41,600
your loved ones have been replaced by aliens or robots.
162
00:11:41,600 –> 00:11:47,160
Apparently the, if you listen to their voice, you can hear that they’re them, but when you look at them,
163
00:11:47,160 –> 00:11:49,760
so it’s like different parts of brain or messed up, which is interesting.
164
00:11:49,760 –> 00:11:50,760
Yeah.
165
00:11:50,760 –> 00:12:01,400
One theory I read, which I don’t know whether we have strong evidence for is that the visual recognition system for faces has lost its connection to the emotional system.
166
00:12:01,400 –> 00:12:04,920
So normally when you see a loved one, you get that little kick off.
167
00:12:04,920 –> 00:12:05,640
Oh, yes.
168
00:12:05,640 –> 00:12:06,840
There she is.
169
00:12:06,840 –> 00:12:09,400
But now you don’t get that.
170
00:12:09,960 –> 00:12:13,280
And then that feels like, okay, we must, something is wrong here.
171
00:12:13,280 –> 00:12:17,320
And then you, then you might jump to this weird conclusion.
172
00:12:17,320 –> 00:12:25,800
There was one interesting case where it was a lady who presented with the problem that she felt the cutlery in her kitchen drawer had been replaced with identical copies.
173
00:12:25,800 –> 00:12:27,920
That’s a very unusual case.
174
00:12:27,920 –> 00:12:30,840
And she obviously must have cared a lot about her cutlery or something.
175
00:12:30,840 –> 00:12:34,680
But again, it was this weird brain error, causing it.
176
00:12:34,680 –> 00:12:39,200
Now many of the errors Oliver Sacks brings up in his books
177
00:12:39,520 –> 00:12:43,520
are interesting because he has described perhaps the most vivid cases.
178
00:12:43,520 –> 00:12:46,160
Many cases are much more boring and everyday.
179
00:12:46,160 –> 00:12:51,800
A lot of everyday neurology is about routine work, and most of the cases are not too exciting.
180
00:12:51,800 –> 00:12:59,240
But quite often they get extremely weird because we’re not normally aware of just how weird our brains are.
181
00:12:59,240 –> 00:13:02,280
And a lot of our normal functioning kind of falls slightly short.
182
00:13:02,280 –> 00:13:03,480
We’re making facts up.
183
00:13:03,480 –> 00:13:04,640
We’re making arguments up.
184
00:13:04,880 –> 00:13:09,160
We’re making up our visual and auditory fields, filling in the details.
185
00:13:09,160 –> 00:13:14,720
And most of the time we’re never getting so close to reality that we can see the holes in that.
186
00:13:14,720 –> 00:13:21,320
Yeah, the interesting thing about talking to so many different people in so many different fields is sometimes it does feel like
187
00:13:21,320 –> 00:13:25,840
there are similar themes that are applied in many different areas.
188
00:13:25,840 –> 00:13:29,560
And then it makes you think that one of, I mean, this is a wholly different idea.
189
00:13:29,560 –> 00:13:35,880
But like Einstein kept working up into his last days trying to find a universal theory that combined everything and that was going on in physics.
190
00:13:35,880 –> 00:13:50,600
And sometimes I wonder like is it me making a pattern that doesn’t exist or is there a pattern there like the night sky and the constellations that that exists inherently that I’m just able to appreciate, which is kind of interesting.
191
00:13:50,600 –> 00:13:53,280
Just the way I guess you wouldn’t be able to prove the difference.
192
00:13:53,280 –> 00:13:58,640
Well, I think sometimes we can do experiments to notice that.
193
00:13:59,000 –> 00:14:00,400
But there is this tricky thing.
194
00:14:00,400 –> 00:14:02,000
We are evolved creatures.
195
00:14:02,000 –> 00:14:11,440
Our ancestors developed brains because it was convenient to have a neural bundle close to the sensory organs at the front end of the organism, and gradually
196
00:14:11,440 –> 00:14:17,840
It gets more and more elaborate to avoid getting eaten and getting more food and building a little bit of a model of the world.
197
00:14:17,840 –> 00:14:23,440
And those organisms that had too bad a model of the world couldn’t learn the right things about the world.
198
00:14:23,440 –> 00:14:26,640
They tend to get eaten or didn’t have enough grand children.
199
00:14:27,160 –> 00:14:35,080
So eventually we ended up with big brains that are pretty good at building a model of a world, but a lot of assumptions about the world are already built in.
200
00:14:35,080 –> 00:14:47,760
And this is super helpful because, as anybody who’s been programming neural networks knows, if you can put in some useful assumptions to lower the dimensionality of a search space, you can train much more effectively.
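A small illustrative sketch (not from the episode) of that point about built-in assumptions lowering the dimensionality of the search space: a layer that assumes locality and weight sharing, as a convolution does, has vastly fewer free parameters to learn than one that assumes nothing. The image size and kernel size below are arbitrary placeholders.

```python
# Compare how many free parameters must be learned for a 64x64 image when the
# model assumes nothing (fully connected) versus when it assumes locality and
# translation invariance (one shared 3x3 convolution kernel).
H, W = 64, 64                     # hypothetical input image size
n_inputs = H * W                  # 4,096 pixels
n_outputs = H * W                 # a same-sized output map

dense_params = n_inputs * n_outputs   # every input connected to every output
conv_params = 3 * 3                   # one 3x3 kernel reused at every position

print(f"fully connected: {dense_params:,} parameters")   # 16,777,216
print(f"shared 3x3 kernel: {conv_params} parameters")    # 9
# The convolution can express far fewer functions, but if its built-in
# assumption roughly matches the world, it needs far less data to train well.
```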
201
00:14:48,480 –> 00:14:58,880
So when the human baby opens its eyes and sees the world, the visual cortex is already kind of prepared for you getting a two dimensional map of the something.
202
00:14:58,880 –> 00:15:05,440
It still doesn’t know how to do this if reading that is something that the brain is going to learn over the coming months.
203
00:15:05,440 –> 00:15:17,640
But those signals are already kind of going to system that preformatted for the assumption that you have two eyes and they are going to build these higher order representations of edges borders objects.
204
00:15:18,000 –> 00:15:27,360
And then gradually, that these objects are moving, that they’re constant, that they exist in three-dimensional space, that they have a relationship to your body, that is already kind of built in.
205
00:15:27,360 –> 00:15:37,560
If the visual nerve attached somewhere else to the cortex, that ought to be court it could also learn it, but it’s kind of tricky and we don’t know how the neural program is going.
206
00:15:38,640 –> 00:15:48,600
But when you have this other situation, but when we start doing astronomy, when we started to think about space and geometry and these things and building our big nice theories.
207
00:15:48,600 –> 00:15:56,280
They are good because you can explain them. I can’t explain how to do three-dimensional vision, that’s just built into my neural network.
208
00:15:56,280 –> 00:16:04,160
But I can kind of talk to you about the geometry of space, I can start talking about it in the Euclidean and non Euclidean geometry.
209
00:16:04,520 –> 00:16:14,480
And we can even start looking at the night sky and making a big model there. Now we found a regularity that the babies are not normally finding and we made it transferable in another way.
210
00:16:14,480 –> 00:16:17,000
So now we can have it as a shared understanding.
211
00:16:17,000 –> 00:16:22,520
The problem is of course some shared understandings are totally wrong, some of them are oversimplified.
212
00:16:22,520 –> 00:16:27,680
So if you say that the earth is flat, that’s a good approximation as long as you’re not moving too far.
213
00:16:28,360 –> 00:16:42,000
Saying the world is spherical is a good approximation as long as you’re not trying to do proper geodesy. Saying the world is roughly an ellipsoid is good enough until you want to put up your satellite orbits, and suddenly you need to do something even more elaborate.
214
00:16:42,000 –> 00:16:52,680
In this case, we even have a meta-theory that has various levels of refinement, but there are probably areas where we don’t even have those meta-theories or even those basic theories yet.
215
00:16:52,680 –> 00:16:56,600
And then some of them might be impossible to get because there is no good pattern.
216
00:16:57,640 –> 00:17:02,480
I was recently reading a book called Project Hail Mary, which is by the author of The Martian.
217
00:17:02,480 –> 00:17:09,480
I don’t know if you’ve read either of those, but the really fantastic books appear into the science, solving problems and an in a fictional sense.
218
00:17:09,480 –> 00:17:15,520
But in the book, there is basically there’s like the humans and they find another alien and they kind of act similarly.
219
00:17:15,520 –> 00:17:20,560
And they wondered at some point, why is it that my cognition, my ability to think and reason similar to yours?
220
00:17:20,560 –> 00:17:27,240
Like there’s like, there’s differences that are like once really good at math instinctually, but the other ones like what humans are, you know, have their advantages or whatever.
221
00:17:27,520 –> 00:17:36,520
And the theory that was postulate is that like the roof of our cognition was set based on how smart the animals in our environment was.
222
00:17:36,520 –> 00:17:48,840
And I was recently talking to a shark expert and they talk about how great white sharks are like the smartest in their area because they’re an apex predator, but they’re only spying off to attack those types of animals.
223
00:17:48,840 –> 00:17:55,800
I’m wondering if the, if our cognition is set because I was wondering about this thing, how is it that the people that were throwing a stick in the, in the field?
224
00:17:56,280 –> 00:18:03,520
can do Euclidean geometry? Like, they had to do, like, basic calculations too, because that’s one of our strongest suits, like, we’re really good at throwing things.
225
00:18:03,520 –> 00:18:09,720
And so that how does that go from from there to building skyscrapers? And at the same time, is it set?
226
00:18:09,720 –> 00:18:18,280
I wonder like where the parameters get set and defined? Like is there an upper limit that we’re that we can evolve to or that we’re set in based on our physiology?
227
00:18:18,760 –> 00:18:29,760
And in Project Hail Mary, the idea was the environment and the things that you have to hunt are where your upper brain range gets set, which was similar for the other species as well.
228
00:18:29,760 –> 00:18:36,280
We’ve never seen another alien, so we don’t know about. I like that idea, or I thought that idea was interesting and it seemed like there might be some merit there.
229
00:18:36,280 –> 00:18:45,680
Consider like where else would our, like we’re not, we’re not going to be like, like evolution is kind of lazy and then our brains take up so much energy. So I think if we were like too smart with probably shave it down a little bit.
230
00:18:46,760 –> 00:18:56,000
Yeah, brains are very energy expensive. So if you’re in a nutrient restricted environment, you’re probably going to cut it down.
231
00:18:56,000 –> 00:19:02,800
We’ve seen that for example, in the evolution of bats, where having light heads and it’s more important than being super smart.
232
00:19:02,800 –> 00:19:08,080
So they have scaled down a lot of the brains except of course we’re hearing and navigation domains.
233
00:19:08,760 –> 00:19:20,760
But the interesting thing about humans is that sometimes I joke that maybe we’re the stupidest possible species that can develop a technological society, because you could imagine the humans evolving higher and higher
234
00:19:20,760 –> 00:19:23,760
And then up until that point where it suddenly takes off.
235
00:19:23,760 –> 00:19:30,760
And it’s no longer a question about getting a better brain, but rather, now we work together as a team.
236
00:19:30,920 –> 00:19:41,360
We can use each other’s brains. We can tell each other things. We can share knowledge between many more brains. And that’s much better than having a super genius brain in many, many domains.
237
00:19:41,360 –> 00:19:49,720
There are exceptions, but when it comes to surviving, well, first of African savannah, teamwork is probably beating being a genius.
238
00:19:50,360 –> 00:20:05,840
And once you have a good teamwork, you can start doing everything. So as soon as you get that agriculture, you can start having people who are acting as repositories of information and you develop tools like writing where you can put information in externally in the world, you can really start taking off.
239
00:20:05,840 –> 00:20:15,040
I think a lot of this is linked to that. We have a really good working memory. We can maintain several things in our heads that are not present at the moment.
240
00:20:15,120 –> 00:20:24,720
We can also communicate, our language is really powerful, because we can sit around the campfire and discuss whether we have a hunt tomorrow.
241
00:20:24,720 –> 00:20:35,120
So we bring up this hypothetical hunt and then we can envision it. We can make plans. We can agree on, well, let’s go to the water hole and you take the other side.
242
00:20:35,520 –> 00:20:48,280
And then we can even make this group intention that together we’re going to do this, and we can even decide how to divide the spoils. So we are not even going to get into a major quarrel afterwards because we already have a plan for that.
243
00:20:48,280 –> 00:20:56,360
Now that sometimes doesn’t work, but it works well enough to make us rather fearsome to other animals on the African savannah.
244
00:20:56,360 –> 00:21:04,920
And then you can scale it up, because thinking about stuff that doesn’t exist here allows you to think about stuff that doesn’t exist at all, like abstractions.
245
00:21:05,600 –> 00:21:12,280
So I think that is one reason for us success. Another one I like I mentioned earlier is that we’re very good at imitation.
246
00:21:12,280 –> 00:21:27,160
Many animals are bad. So they imitate each other, but that’s also not the most effective way of coming information because telling people stuff can be very effective. You get abstractions that so we have several tools that are disposed of and we inventing new ones all the time.
247
00:21:27,800 –> 00:21:45,520
And part of that is probably because of our over and designed the front allows we can change our behavior. If you tell me the right sentence, I might change the way I live my life. It’s rare that that happens, but we have certainly all encountered people who could be changed by having a realization or meeting somebody.
248
00:21:45,520 –> 00:21:52,680
That doesn’t usually happen that much to cats. You can tell a cat almost anything and they’re not going to change behavior very much.
249
00:21:54,080 –> 00:22:06,960
Partially because of lack of language, but also there is this behavioral flexibility in humans that is both wonderful, we can adapt to almost anything, and horrifying, we adapt to anything and we quite often think, oh, this is normal. This is totally fine.
250
00:22:06,960 –> 00:22:08,560
But
251
00:22:08,560 –> 00:22:14,360
So, um, do you think that our way of cognition.
252
00:22:15,320 –> 00:22:23,960
Looking at other animals on the planet is I guess the only other ways that we have cognition intelligence as a, you know, a meter to gauge against is our cognition.
253
00:22:23,960 –> 00:22:38,520
Weird compared to other animals? Because I always read these reports of people talking about, like, oh, are whales sentient, they have names for each other, they have all these different things. But do they think in a similar way, are their brain structures working in a similar way? Because most animals have
254
00:22:39,000 –> 00:22:47,480
I haven’t read a lot about it, but I’m very interested in this, because especially if, like, whales were somehow sentient and were thinking like us, and you know, we’ve been hunting them and stuff, which is terrible, but, um,
255
00:22:47,480 –> 00:22:53,080
Do you do you think there’s something special in the in the structure and the in the way that we
256
00:22:53,080 –> 00:22:56,640
In our cognition that sets us apart.
257
00:22:56,640 –> 00:23:06,880
In itself, like, I don’t have like phrased this this question, but like, I think it’s a very good question and we don’t have a great answer to it yet. I have no
258
00:23:07,720 –> 00:23:26,440
I think the honest thing is yeah, the researchers disagree. My view is yeah, look at the monkey brain, look at the human brain. It got all the same parts from a low level perspective. There is no real difference. There is more stuff in the human brain in the frontal lobe region. Yeah, there are
259
00:23:26,440 –> 00:23:36,400
One weird cell type that seemed to be unique to high primates, but it’s not entirely obvious what it’s doing anything magical. It might just be a random electronic component.
260
00:23:36,880 –> 00:23:42,240
The real difference to be that we have way more of some things that other animals have a bit less on.
261
00:23:42,240 –> 00:23:48,480
So for example, when it comes to working memory, humans are good at thinking about things that are not present.
262
00:23:48,480 –> 00:23:54,640
We are good at knowing that I know that you know that I know games like that.
263
00:23:54,640 –> 00:24:06,040
We’re higher primates like chimpanzees are not bad at it because if you’re a social animal, you totally need to know a little bit about how to cheat and avoid cheating and
264
00:24:06,760 –> 00:24:13,840
try to help monkeys that don’t know certain things about dangerous pedestals and bananas, etc.
265
00:24:13,840 –> 00:24:22,280
So all of us is going on there, but not to be same extent and it’s a little bit perhaps like when you withdraw a control rod from a nuclear reactor.
266
00:24:22,280 –> 00:24:34,840
At a certain point, the amount of energy output goes up quite dramatically because you get each extra unit of working memory allows you to do an order of magnitude more complex thinking.
267
00:24:35,320 –> 00:24:50,640
Language is also pretty unique because it’s so open-ended. And again, at this point, people will be bringing in these chimpanzees that can do sign language, and various parrots, so it’s not entirely clear that we’ve got a total monopoly on it. And tool use is the same thing.
268
00:24:50,640 –> 00:25:00,040
It’s just that humans make tools and then we carry them around if it’s a good tool, partially because it’s so much easier, because we’ve got our hands free, because we’re walking on two legs.
269
00:25:00,280 –> 00:25:11,680
If I were a chimpanzee, carrying a tool would be really hard walking on my knuckles, so there are always boring practical reasons too, but generally I think that most mammalian brains are fairly alike.
270
00:25:11,680 –> 00:25:21,200
So when we get to things like sentience, this gets disturbing, because yeah, I probably have no reason to think that a mouse is any less sentient than I am.
271
00:25:21,520 –> 00:25:31,120
It’s probably not thinking very much about the state of the world, it probably has a rather simple mind, so it might not be worried that it happens to be inside a neuroscience lab.
272
00:25:31,120 –> 00:25:38,960
I would be rather worried if I realized I’m in a cage in a neuroscience lab, but the basic sentience might be the same there.
273
00:25:38,960 –> 00:25:49,880
And that of course leads to all sorts of very interesting issues about ethics or treating animals and other organisms, but also okay we’ve been studying this for a long while and we’re still not great.
274
00:25:49,880 –> 00:25:52,760
We understand that even these organs that are related to us.
275
00:25:52,760 –> 00:26:19,840
This gets to a similar topic that I wanted to ask you about, which is, so Carl Jung thought that there was a collective unconscious of some type, that we inherited memory from our past, and that kind of touches on, like, habits and stuff. But whether it’s true or not, I often wonder whether our instincts are guiding us towards something and maybe our logic of how we interpret it is wrong.
276
00:26:19,840 –> 00:26:33,760
But there’s something there to think about and so you’re you’re involved in so many different areas and so I’m curious where where the edge of your instincts are telling you that there’s something they’re worth digging and like is there is an avenue where your gut says like if I dig there, there’s something more.
277
00:26:33,760 –> 00:26:36,520
There’s something that’s really interesting that I don’t think people have thought about.
278
00:26:36,520 –> 00:26:41,000
I know there’s like several topics just in the conversation before that you like point out that I’ve never thought about before.
279
00:26:41,000 –> 00:26:42,400
So this is like it’s really cool.
280
00:26:42,400 –> 00:26:48,200
So I’m, like, I don’t know to what extent you use your gut, you know, versus think about things, to find ideas to dig into.
281
00:26:48,720 –> 00:27:07,640
Yeah, so being an academic sitting in a philosophy department at a major university and trying to write papers, I’m of course trying to pretend that I’m this rational mind that is just thinking the sublime thoughts and then writing them up very carefully in a paper with all the correct scientific ways of checking the validity of everything.
282
00:27:07,640 –> 00:27:17,920
And of course anybody who has been around academics knows that, no, there is all sorts of the normal muddled human thinking going on, and then we refine that into a somewhat presentable paper.
283
00:27:18,160 –> 00:27:21,840
Eventually and even selecting your research topics.
284
00:27:21,840 –> 00:27:30,480
I’m literally sitting one floor above the Global Priorities Institute, where we’re working on questions like what are the most important things to fix in the world.
285
00:27:30,480 –> 00:27:33,200
We’re trying to understand that at my institute too.
286
00:27:33,200 –> 00:27:43,960
We have realized that setting your priorities is super important because typically the most important thing you could be doing is probably an order of magnitude more important than the second most important thing.
287
00:27:44,400 –> 00:27:48,640
So spending a lot of time getting your priorities in order is quite often worthwhile.
288
00:27:48,640 –> 00:27:59,040
Yet I don’t do that that much and I can of course try to come up with some nice excuses, but in practice, I’m a rather disorganized person.
289
00:27:59,040 –> 00:28:09,640
I’m solving it instead by creative procrastination jumping between different topics rapidly because I get bored and tired of it quickly and then I replenish myself by doing something else.
290
00:28:10,360 –> 00:28:17,640
And this is of course where gut instincts come in handy. In some cases, it’s just like, I realize I can do something useful here.
291
00:28:17,640 –> 00:28:27,040
This is something that pops up where I can see that if I do a little bit of math on this or a little simulation, I can actually answer a question.
292
00:28:27,040 –> 00:28:31,520
And then I’m just doing that because it’s an opportunity. It’s a low hanging fruit.
293
00:28:32,120 –> 00:28:41,560
Sometimes I notice this seems to be a kind of crucial thing. It will regardless of what the answer is it’s going to affect the whole future. I should probably look more at it.
294
00:28:41,560 –> 00:28:49,960
But often these gut instincts are slightly unreliable about many topics, because when does our intuition work well?
295
00:28:49,960 –> 00:29:00,560
Well, involves environments where it’s been trained on a lot of evidence, even because our all our ancestors have to deal with it or because we have a lot of experience dealing with people.
296
00:29:01,080 –> 00:29:10,840
After a while, you actually learn how to recognize somebody who’s full of bullshit. Sometimes you notice there is something off about this guy. I don’t know what it is.
297
00:29:10,840 –> 00:29:27,800
But in that case, you should probably trust the gut feeling. You might still want to check if this is the correct one, because sometimes it’s instead prejudice, which is the name we give to gut feelings that we’re not proud of and might actually be immoral and bad, that we ought to change.
298
00:29:27,880 –> 00:29:34,040
Just like the phobia might be a natural thing. So sometimes you actually want to modify them.
299
00:29:34,040 –> 00:29:44,360
But typically intuitions work well when we have a lot of data, a lot of feedback. It’s very much like the current neural networks. You have a lot of data to train them and they give their intuitive responses.
300
00:29:44,360 –> 00:29:55,240
But you also have a problem in domains that are very different. So if you’re trying to do theoretical physics based on your gut feeling, you’re going to just end up in total nonsense.
301
00:29:55,440 –> 00:30:08,800
Because theoretical physics doesn’t work like that. The constraints that happen in quantum mechanics or cosmology are so far away from anything we normally experience that those gut feelings are not going to be good.
302
00:30:08,800 –> 00:30:19,680
Of course, once you talk to a senior astronomer who’s been hanging out in astronomy for decades, he or she will probably have a decent gut feeling about what’s a good astronomy question.
303
00:30:19,880 –> 00:30:29,320
How do I make my telescope do this? Can I get that kind of data? Is this a good project or not? You do develop that even in these weird abstract fields.
304
00:30:29,320 –> 00:30:35,240
Mathematicians do the same and they’re amazingly good at knowing sometimes when a problem looks fruitful or not.
305
00:30:35,240 –> 00:30:42,760
And to us outsiders, that looks like total magic. How can you even know that? You haven’t solved the problem yet, but you seem to know something about it.
306
00:30:43,640 –> 00:30:54,280
But the problem is, of course, gut feelings are just like that sometimes they’re wrong. And usually my gut feeling about gut feelings is that I want to interrogate them.
307
00:30:54,280 –> 00:30:57,520
So is there.
308
00:30:57,520 –> 00:31:13,440
So related to, I think, intelligence and gut feeling. I’m curious about how the, the guy, people have been talking more about how like the gut biome affects cognition and stuff. And so there’s a bit of your work on emulating the brain.
309
00:31:13,600 –> 00:31:22,680
But before talking about emulating the brain, I’m very curious, because we keep mentioning the machines, your thoughts on how intelligent machines are now. I’ve read about LLMs. I’m reading a book on machine learning.
310
00:31:22,680 –> 00:31:34,400
And I think someone made a, there’s like a meme saying like if you call machine learning AI, just like the probability statistics, like people get upset. But how, how intelligent is machine intelligence now.
311
00:31:35,600 –> 00:31:54,520
Yeah. And I think that gets back to that issue about what makes us special and why we took over the world from the African savannah. So one way of defining intelligence that comes from Shane Legg, one of the founders of DeepMind, is that it’s the ability to solve problems, to reach your goals, in general environments.
312
00:31:55,040 –> 00:32:08,120
Now, the interesting part here is general environments. That is, this is something that works both on the African savannah and maybe in a polar desert. It’s something that works both in the boardroom and on the street.
313
00:32:08,120 –> 00:32:17,200
That general ability, that’s usually what we call intelligence. Now there are many things that are specialized and do a much better job in one environment.
314
00:32:17,520 –> 00:32:25,800
And they might as long as that, the market is for only few we care about, we might say that’s very intelligent, but generally we’re interested in general intelligence.
315
00:32:25,800 –> 00:32:39,200
Now, when it comes to machine intelligence, for a long while people were not building generally intelligent machines very much people will say, yeah, narrow AI is actually what we can sell and make money from. So let’s go for that.
316
00:32:39,480 –> 00:32:49,200
And then measuring its intelligence was not even interesting. You’re just interested in performance. How good is it that detecting cats in the pictures in pictures.
317
00:32:49,200 –> 00:33:00,560
But what happened over the last few years is that we found that the large language models can fake intelligence in a way that’s so good that it actually approximates real intelligence.
318
00:33:00,760 –> 00:33:18,560
And it’s leading to this weird situation, but yeah, maybe they’re just stochastic parrots, maybe we’re being fooled by a pile of rocks, as some say. But we’re finding out that stochastic parrots or piles of rocks actually can do quite a lot of clever things that we normally would say, yeah, that requires a bit of intelligence and understanding.
319
00:33:19,240 –> 00:33:32,720
And you can be critical and say, yeah, but that’s just what it looks like, because they’ve been trained on literally billions of people’s output. They’re very good at faking what humans would do. Well, we might be using real intelligence, so they’re just imitating that.
320
00:33:32,720 –> 00:33:44,840
But if you imitate it well enough, then that might be still practical useful. If I want to very quickly grab the stuff and put it together into a paper.
321
00:33:45,800 –> 00:33:51,200
Let’s assume that I don’t care about the quality. I can very easily use a language model to make a passable paper.
322
00:33:51,200 –> 00:34:00,560
It’s something that would have been much harder before. And the interesting thing is we can even use it to design other tasks.
323
00:34:00,560 –> 00:34:14,360
And at some point you say, this is actually looking a bit like intelligence. It’s not generally intelligent enough. It has a lot of flaws and unreliability, which means it’s very, very dangerous to rely on. It falls a bit short.
324
00:34:15,080 –> 00:34:24,440
But it might be very much like you have your very eccentric friend who is very good at some things and very bad at other things. And he’s also totally overconfident that he can do everything.
325
00:34:24,440 –> 00:34:33,840
That friend is sometimes somebody you want to bring with you. Sometimes you don’t want to put him in charge of things, but there are some tasks that you can leave to him.
326
00:34:33,840 –> 00:34:36,880
Now, how intelligent is that friend?
327
00:34:37,640 –> 00:34:47,560
It’s hard to make that overall assessment. You could perhaps get some kind of IQ score, but that’s not going to tell you what you actually want to know. And that is, where can I trust him to do a good job?
328
00:34:47,560 –> 00:34:52,120
Where can I know that here is just going to pretend that he knows what is doing.
329
00:34:52,120 –> 00:34:57,240
Those questions are more important. And this is, of course, the death of the Turing test.
330
00:34:57,240 –> 00:35:05,000
Alan Turing, who proposed it, never claimed that his test was intended to measure real intelligence. He was making a good philosophical argument.
331
00:35:05,000 –> 00:35:12,200
That if something can win this test, we must admit that it looks like it’s thinking. That’s essentially what he’s saying.
332
00:35:12,200 –> 00:35:18,920
And back in the 1950s, this was something like an outrageous claim, because computers were nothing like that.
333
00:35:18,920 –> 00:35:25,480
So the whole idea that something could fake thinking well enough, but we couldn’t tell it apart, was a weird claim.
334
00:35:25,720 –> 00:35:30,120
But he was right in his prediction that yeah, eventually this is going to look totally normal.
335
00:35:30,120 –> 00:35:34,840
Now our problem is, yeah, now we get this stuff that is indistinguishable from a confused person.
336
00:35:34,840 –> 00:35:39,960
And that might be good enough for quite a lot of jobs because we can be done by confused people.
337
00:35:39,960 –> 00:35:46,440
However, nobody really cares about that. You’re interested in more, because it’s only kind of interesting that you now have these bullshit generators that
338
00:35:46,440 –> 00:35:49,880
do a very credible job of talking like a normal person.
339
00:35:49,880 –> 00:35:52,840
So there was a, oh, sorry, go ahead.
340
00:35:52,840 –> 00:36:04,120
Hey, no, I was just, I was thinking there’s a person who made like a Turing test app type thing where you were either talking to a human or an LLM type thing.
341
00:36:04,120 –> 00:36:07,320
You had a guess which one was and they were trying to see how often people got it right.
342
00:36:07,320 –> 00:36:10,680
And apparently like you couldn’t tell within modern stuff.
343
00:36:10,680 –> 00:36:13,480
I took the test and I got I’m like 90% right.
344
00:36:13,480 –> 00:36:17,960
The key for me is I was cheating and I kept asking them what love is like to experience.
345
00:36:19,960 –> 00:36:22,760
But then, like, a lot of the time the humans don’t know either.
346
00:36:22,760 –> 00:36:30,200
There are ways of telling them apart so far, but that is just changing.
347
00:36:30,200 –> 00:36:33,880
It’s a little bit like the image generation systems.
348
00:36:33,880 –> 00:36:38,280
Last year people were all joking about them making the wrong number of fingers.
349
00:36:38,280 –> 00:36:45,640
But sometime early spring, stable diffusion just stopped making the wrong number of fingers unless it gets confused by other stuff.
350
00:36:46,200 –> 00:36:50,760
They just get better. That doesn’t mean that it now understands what the hand is.
351
00:36:50,760 –> 00:36:55,400
It still has this very weird perspective on what the visual world is.
352
00:36:55,400 –> 00:36:57,400
And the more advanced language models.
353
00:36:57,400 –> 00:37:00,040
Insofar as they have an understanding,
354
00:37:00,040 –> 00:37:02,440
it’s not exactly what we would call understanding.
355
00:37:02,440 –> 00:37:07,720
It seems like they have internal representations of the state of a text.
356
00:37:07,720 –> 00:37:12,440
You can describe them running around the library for they seem to be actually generating mental maps.
357
00:37:13,400 –> 00:37:18,120
But it’s still a rather rudimentary. It might also be that it doesn’t generalize very far.
358
00:37:18,120 –> 00:37:22,280
One possibility might be that this is about as good as it gets.
359
00:37:22,280 –> 00:37:24,760
But you don’t have much more text data to train them on.
360
00:37:24,760 –> 00:37:31,160
And you can’t actually do them more advanced forms of thinking because you don’t have enough examples in the text.
361
00:37:31,160 –> 00:37:34,200
Now that is one possibility.
362
00:37:34,200 –> 00:37:38,520
Another possibility might be that you just keep on scaling this up and you actually do get generally
363
00:37:38,520 –> 00:37:44,760
intelligence by faking it till you make it. Because that’s one of the big problems I have as a
364
00:37:44,760 –> 00:37:49,640
former computational neuroscientist with philosophers. Philosophers have this idea that
365
00:37:49,640 –> 00:37:53,960
the mind has these beautiful logical relations going on between concepts.
366
00:37:53,960 –> 00:37:59,080
And I’m aware that, no, it’s a lot of squishy neurons sending signals and they don’t always get
367
00:37:59,080 –> 00:38:06,360
where they should. Many in synapses fail randomly and it all is working in a very messy way.
368
00:38:06,360 –> 00:38:11,960
We have learned a lot of this stuff very randomly, and we shouldn’t trust it more than being robust enough
369
00:38:11,960 –> 00:38:16,840
to get through life. Now robust enough to get through life can still be very, very powerful.
370
00:38:16,840 –> 00:38:21,960
We’re building skyscrapers and going to the moon. And I think language models might similarly
371
00:38:21,960 –> 00:38:27,640
fake it in such a way that you can make a very useful tool for solving problems.
372
00:38:27,640 –> 00:38:32,600
It’s just that it’s not quite reliable enough for prime time this year, this month.
373
00:38:32,600 –> 00:38:37,320
But the rate is so rapid that we should probably expect that in one or two years.
374
00:38:37,320 –> 00:38:40,040
But everybody’s going to have a personal assistant AI.
375
00:38:40,040 –> 00:38:48,040
I was reading about it and it feels maybe it’s just like the paper says reading that we’re
376
00:38:48,040 –> 00:38:53,240
like bold or whatever, but one of the limiting factors in the AI machine learning that exists now
377
00:38:53,240 –> 00:38:57,000
is like the working memory that we’ve been talking about thus far because from what I understand
378
00:38:57,000 –> 00:39:02,280
if it’s translating something or it’s working on a probabilistic sentence, it only pulls like a
379
00:39:02,280 –> 00:39:06,200
small segment of that to then guess like what’s the probability of the next word being the next word
380
00:39:06,200 –> 00:39:10,600
next word. And so it kind of feels like that same you were saying earlier where like was something
381
00:39:10,600 –> 00:39:14,120
really interesting about the human cognition is our ability to have a lot of stuff in our working
382
00:39:14,120 –> 00:39:17,560
memory where right now the working memory is really small. And I imagine that’s because they’re
383
00:39:17,560 –> 00:39:21,800
trying to be really sensitive with compute and the cost of building things. But I wonder what would
384
00:39:21,800 –> 00:39:26,040
happen if we really exploded the work member if I’m right on this, you know, you know, you’ll tell me
385
00:39:26,920 –> 00:39:32,600
that if exploding the working memory would would allow them to have less hallucinate, I think that’s
386
00:39:32,600 –> 00:39:35,560
a lot of times where like hallucinations, all these other things come from they don’t have like the
387
00:39:35,560 –> 00:39:42,120
context, the probabilities of a larger stream of data to know what it was actually talking about.
388
00:39:42,120 –> 00:39:49,240
Yeah, you're right about this context being the important thing. So these neural networks
389
00:39:49,240 –> 00:39:55,240
basically restart from the beginning of a text, reading it and then putting some kind of
390
00:39:55,240 –> 00:40:02,360
representation into the system. And it has a certain window size, but that has been growing tremendously.
391
00:40:02,360 –> 00:40:08,200
There is one system, I think this was from Anthropic, that basically could take in an entire novel
392
00:40:08,200 –> 00:40:14,440
and keep it in the context. That is downright frightening when you think about it as working
393
00:40:14,440 –> 00:40:20,120
memory. It’s like going from seven things in the working memory to seven million things. Whoa.
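As a concrete illustration of the working-memory analogy, here is a minimal sketch of how a fixed context window behaves; it assumes nothing about any particular model's API, and the example history is invented purely for illustration.

```python
# Minimal sketch of a fixed context window (illustrative only).
# Only the last `window_size` tokens are visible to the model, so facts that
# scrolled out can no longer constrain what it says next, which is one source
# of the "lost its context" hallucinations discussed in this conversation.

def visible_context(tokens, window_size):
    """Return the slice of the history the model actually conditions on."""
    return tokens[-window_size:]

history = ["the", "hero", "is", "called", "Ada"] + ["filler"] * 10_000

print(visible_context(history, window_size=8))                 # "Ada" has scrolled out of view
print("Ada" in visible_context(history, window_size=20_000))   # a huge window still retains it
```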
394
00:40:20,840 –> 00:40:26,840
At the same time, the hallucinations: some of them depend on it having lost its context. I think that was
395
00:40:26,840 –> 00:40:32,760
most obvious with earlier versions of the language models. And of course their remote ancestors,
396
00:40:32,760 –> 00:40:39,080
which I was playing around with back in the 1980s on my home computer. I read in Scientific American
397
00:40:39,080 –> 00:40:46,040
a very nice article about computer-generated nonsense that pointed out that you can take a text,
398
00:40:46,040 –> 00:40:52,120
and then you look at the probabilities of the next word given the previous word and you can generate
399
00:40:52,120 –> 00:40:57,640
that using a Markov chain, and then you get a nonsense text. But if you take the two previous words,
400
00:40:57,640 –> 00:41:02,760
then you get a more sparse matrix and now the text is going to make more sense. So you can expand
401
00:41:02,760 –> 00:41:08,440
that kind of text window and that generates interesting nonsense text. And these are in some
402
00:41:08,440 –> 00:41:15,560
sense the remote ancestors of GPT-4 and its relatives.
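As a concrete sketch of the 1980s technique described here, below is a minimal word-level Markov chain text generator: `order=1` conditions on one previous word, `order=2` on the two previous words, which is exactly the widening-window trick just mentioned. The toy corpus is invented for illustration; any plain text works.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """For each tuple of `order` previous words, collect the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30):
    """Random-walk the chain, sampling each next word in proportion to its observed frequency."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(state):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug and the cat saw the dog"
print(generate(build_chain(corpus, order=1)))  # nonsense, but locally plausible
print(generate(build_chain(corpus, order=2)))  # conditioning on two words reads more coherently
```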
403
00:41:15,560 –> 00:41:20,840
Now the cool part here is of course: when you have a vast context window, does that preclude hallucinations? No, because we're still
404
00:41:20,840 –> 00:41:26,840
getting the most likely continuation of the text, and unfortunately that is plausible-sounding rather than true.
405
00:41:26,840 –> 00:41:33,800
We need to kind of train them on truth instead of plausible-soundingness, and that is very tricky,
406
00:41:33,800 –> 00:41:39,160
because we don’t have great sources of truth in our world. We have an enormous amount of text and data
407
00:41:39,160 –> 00:41:45,000
but we don't have that many good ways of checking it. But of course, an army of programmers and
408
00:41:45,000 –> 00:41:49,320
researchers are working on this question right now because that is what would actually make the AI
409
00:41:49,320 –> 00:41:54,520
useful. Otherwise it's going to make up a plausible-sounding explanation of whatever scientific
410
00:41:54,520 –> 00:41:59,560
field I ask it about, and mention, here are a few good papers and books about it, and they all
411
00:41:59,560 –> 00:42:04,600
sound really plausible, might even have authors that are active in the field, but are made up.
412
00:42:04,600 –> 00:42:10,680
Which is tremendously annoying, because being a lazy academic I would of course want it to just
413
00:42:10,680 –> 00:42:17,000
list the 10 best papers to read about this. That is probably going to arrive in a few months
414
00:42:17,000 –> 00:42:22,840
or within a year or something, but right now you can't trust them. Which means it's the greatest
415
00:42:22,840 –> 00:42:29,640
creative writing tool I've ever had. So I've been having so much fun with ChatGPT just writing
416
00:42:29,640 –> 00:42:35,160
fiction or coming up with ideas for role-playing games, because here truth doesn't matter. Consistency
417
00:42:35,160 –> 00:42:42,360
is somewhat useful, style is very important, and at style they're really good. And I have friends working
418
00:42:42,360 –> 00:42:46,120
in marketing and they're of course saying that this is doing our job for us.
419
00:42:46,120 –> 00:42:49,320
The um
420
00:42:49,320 –> 00:43:00,120
Who do you think is currently doing the best job at building a truthy system? And then,
421
00:43:01,400 –> 00:43:06,600
underneath that as well do you think a truthy system is most likely to come out of a closed system
422
00:43:06,600 –> 00:43:12,280
like OpenAI, which is no longer open, or an open source system that has all the weights
423
00:43:12,280 –> 00:43:17,240
and all the measures known, so that you can really know? Like, you can go all the way down to
424
00:43:17,240 –> 00:43:22,040
the turtles, turtles all the way down, to see if it's truthy all the way down. And so who's
425
00:43:22,040 –> 00:43:26,360
building the truthy system now, in your opinion? Who's on the path to achieving it, given
426
00:43:26,360 –> 00:43:31,880
all the complexity in the world? And then what would be your guess on which model: an open
427
00:43:31,880 –> 00:43:36,360
source one everyone can see, validate, and contribute to personally, or a closed system that, you know,
428
00:43:36,360 –> 00:43:42,120
just has the best minds within a corporation working on it? Yeah. The first question I don't
429
00:43:42,120 –> 00:43:48,040
know the answer to. I don't know who's best at it. One way of trying to answer it would be to say something
430
00:43:48,040 –> 00:43:53,080
like maybe I should expect the people who are working on actual reinforcement learning and
431
00:43:53,080 –> 00:43:59,560
actual robotics to be much closer to truthiness than the people who work on language models.
432
00:43:59,560 –> 00:44:04,280
And one reason might be that if your robot is getting the feedback from the environment when it’s
433
00:44:04,280 –> 00:44:09,000
actually doing stuff that is going to force it to make a world model that actually corresponds
434
00:44:09,000 –> 00:44:15,000
well enough to the actual world, while if it's just blabbering on, making up plausible-sounding output,
435
00:44:15,000 –> 00:44:21,320
the constraints are so much weaker. That might be true, but I'm not entirely convinced about it,
436
00:44:21,320 –> 00:44:28,120
because there is a lot of overlap: there are literally robotics companies that are using language models to
437
00:44:28,120 –> 00:44:33,400
generate plans and programs for robotic arms. So you can imagine that there is going to be a language
438
00:44:33,400 –> 00:44:42,440
going on inside the robot, which is also a hilariously weird idea in itself. But when it comes to openness
439
00:44:42,440 –> 00:44:49,080
versus closedness, I don't think truth has very much to do with that. Openness is more about:
440
00:44:49,080 –> 00:44:55,160
are we getting more experimentation along a lot of unexpected directions versus are we getting
441
00:44:55,160 –> 00:45:02,360
effective experimentation maybe with big resources in a few directions. So one of the things I
442
00:45:02,360 –> 00:45:08,840
love about the world of AI generated pictures is that you have somebody publishing a paper
443
00:45:08,840 –> 00:45:13,960
about how to do something, quite often academic or corporate, and within two weeks
444
00:45:13,960 –> 00:45:19,080
the relevant forum has an implementation that you can run on your own system. And then of course
445
00:45:19,080 –> 00:45:25,160
people generate scantily clad anime ladies, but that's a secondary thing. The interesting
446
00:45:25,160 –> 00:45:31,240
thing is of course that many researchers would probably say, yeah, scantily clad ladies is not exactly why
447
00:45:31,240 –> 00:45:36,360
we're doing this research, but others will say, yeah, but I want to use it for that; and architects
448
00:45:36,360 –> 00:45:42,920
say hey I’m totally using this for interior design so you get different interesting takes on what
449
00:45:42,920 –> 00:45:48,760
it might be good for, and I think that's very healthy for many technologies. This, on the other hand, makes the
450
00:45:48,760 –> 00:45:54,840
risk and safety part of my brain go off: wait a minute, aren't we a bit scared about AI
451
00:45:54,840 –> 00:46:00,920
around at my institute? Isn't this actually something that means that it's very hard to control? And that's
452
00:46:00,920 –> 00:46:05,880
also true. There is some technology where I think it's a great thing that you have more people playing
453
00:46:05,880 –> 00:46:12,680
around with it. As I mentioned earlier, I grew up in the 1980s with my little Sinclair ZX81 home
454
00:46:12,680 –> 00:46:18,200
computer, one kilobyte of memory, which you connected to a television set, and then I advanced to the ZX
455
00:46:18,200 –> 00:46:25,320
Spectrum with 48 kilobytes of memory, and so on and so on. And a lot of my friends were growing up with
456
00:46:25,320 –> 00:46:30,520
other small computers and my generation became very used to playing around with computers and
457
00:46:30,520 –> 00:46:34,840
understanding them, and our parents were kind of watching the kids play around with them, so there
458
00:46:34,840 –> 00:46:41,480
was an understanding of computing that, when in the late eighties and early nineties the PC and
459
00:46:41,480 –> 00:46:47,400
the internet became more real, actually allowed it to be integrated into society, and also made many of us
460
00:46:47,400 –> 00:46:54,520
rather aware of the risks, possibilities, limitations and opportunities. Great. Now the same thing
461
00:46:54,520 –> 00:47:00,040
has not yet happened for AI, and it might be very useful, except that if it turns out that you get
462
00:47:00,040 –> 00:47:07,000
something equivalent to a gun in AI, suddenly you have open sourced guns; suddenly everybody can get a
463
00:47:07,000 –> 00:47:13,560
gun if they want to. Now, you're American, so you might have a different perspective on guns than I have as
464
00:47:13,560 –> 00:47:18,760
a European, but you can kind of see the problem: there is some technology that you maybe don't want to
465
00:47:18,760 –> 00:47:25,400
democratize too much. And there is an interesting question about offense versus defense. When it comes
466
00:47:25,400 –> 00:47:31,000
to computers, we have been rather bad at doing defense: it's far too easy to hack and destroy
467
00:47:31,000 –> 00:47:37,640
and sabotage computers, and given how much depends on them, that's a bit of an uneasy situation.
468
00:47:37,640 –> 00:47:43,080
Yes, having a lot of programmers means that some of them are going to be hackers and make computer viruses,
469
00:47:43,080 –> 00:47:48,200
but some of them are also going to start antivirus companies and computer security companies,
470
00:47:48,200 –> 00:47:54,440
so it does work out somewhat well for computers; it hasn't collapsed completely yet, but it could
471
00:47:54,440 –> 00:48:00,120
be way better this could of course go either way when it comes to AI and this is part of the ongoing
472
00:48:00,120 –> 00:48:06,920
arguments people are having about open source versus closed. Often it's framed as power: if I
473
00:48:06,920 –> 00:48:12,760
cannot control the software that is important in my life, that's kind of a scary, dangerous situation;
474
00:48:12,760 –> 00:48:19,400
on the other hand maybe it’s also a good thing to keep control over some dangerous software
475
00:48:19,400 –> 00:48:24,360
and we don't have great intuitions. So getting back to our earlier conversation about intuition: we
476
00:48:24,360 –> 00:48:29,480
haven't had that much experience. The relevant experience here might be, can somebody make
477
00:48:29,480 –> 00:48:35,400
the equivalent of a shooting spree or a weapon of mass destruction using AI? And so far we haven't seen
478
00:48:35,400 –> 00:48:41,960
that. I think once people do scalable identity theft or something else like that we might change our
479
00:48:41,960 –> 00:48:47,880
tune a fair bit, but then of course it might still be too late; there are various genies
480
00:48:47,880 –> 00:48:51,960
out of their bottles, and in some cases we might just have to learn how to live with them.
481
00:48:53,240 –> 00:49:00,520
Yeah, this touches on a related topic that I think you wrote about, biohacking:
482
00:49:00,520 –> 00:49:05,720
do you have to fear a world government, an angry PhD student, or a biohacker, or something? And I think
483
00:49:05,720 –> 00:49:11,160
that's roughly the title of it. But I'm wondering, I have wondered for the longest time,
484
00:49:11,160 –> 00:49:17,880
why haven't we had a biohacking incident yet, where something got out there? And then I'm
485
00:49:17,880 –> 00:49:23,800
curious which is going to be the greater threat, biohacking or AI, when they each offer different
486
00:49:23,800 –> 00:49:31,160
ways for a problem to hit the world. And I'm continually surprised, or maybe the
487
00:49:31,160 –> 00:49:34,840
government is really good at handling them, to the point where we just don't hear about
488
00:49:34,840 –> 00:49:40,840
these incidents. But I feel like with AI, your ability to use machine learning, usually with
489
00:49:40,840 –> 00:49:45,720
these open source tools, the bar is lower for you to do damage to the world compared to
490
00:49:45,720 –> 00:49:49,560
biohacking. You have to kind of understand what you're doing, though you can to some extent paint
491
00:49:49,560 –> 00:49:54,520
by numbers if you're following something like what Josiah Zayner builds at The Odin; you probably could just
492
00:49:54,520 –> 00:49:58,120
buy the right stuff and for like 500 bucks have something that’s bad but
493
00:49:58,120 –> 00:50:04,520
So I guess there's like two questions there. Yeah, go ahead. Yeah, so I think this is a really
494
00:50:04,520 –> 00:50:10,360
interesting one there was recently a paper published by some people at MIT who used a large
495
00:50:10,360 –> 00:50:15,640
language model to see if non-scientists could get advice on how to make a pandemic
496
00:50:15,640 –> 00:50:23,880
virus, and they got shockingly far in one hour. Now, critics would say, yeah, but they still
497
00:50:23,880 –> 00:50:30,120
never did anything in a lab, this is just hype, etc. etc. And I've been trying to get them to say:
498
00:50:30,120 –> 00:50:35,720
so at what point would you say that now we have evidence? Would it be that they actually
499
00:50:35,720 –> 00:50:42,440
got a vial of DNA ordered from a supplier, that they actually successfully transfected an organism,
500
00:50:42,440 –> 00:50:49,000
or that they actually unleashed a pandemic? At some point a reasonable person must say, actually,
501
00:50:49,000 –> 00:50:55,400
this increased the risk. Now, the interesting thing is that there are different kinds of tools. The language
502
00:50:55,400 –> 00:51:01,960
models are not that great at biology. I've been asking the language models various chemistry questions,
503
00:51:01,960 –> 00:51:06,360
and typically they tell me, no, don't mix those chemicals, it's dangerous, which is totally
504
00:51:06,360 –> 00:51:12,200
correct because I always ask about very very ill-advised chemistry but then they usually make
505
00:51:12,200 –> 00:51:18,040
a total mess of things: they actually get the chemical reactions wrong; they're not good enough at that yet.
506
00:51:18,040 –> 00:51:23,720
So I'm not super worried about that, but it's going to help the people who
507
00:51:23,720 –> 00:51:29,320
know absolutely the least. But you still need to know a bit to be dangerous; you need to find
508
00:51:29,320 –> 00:51:35,400
your way around the lab. I have a suspicion that were I to try this it would be a total failure,
509
00:51:35,400 –> 00:51:40,440
because I'm not very good at actually pipetting stuff and following the rules of a lab well
510
00:51:40,440 –> 00:51:45,160
enough; I would probably just leave a mess on the lab bench, which is probably the best outcome for
511
00:51:45,160 –> 00:51:51,240
everybody involved. However, those tacit skills: there are some people who blithely say, yeah, they're
512
00:51:51,240 –> 00:51:57,240
really, really hard and that is not going to spread, so we're totally safe, biohacking is totally
513
00:51:57,240 –> 00:52:03,800
overrated. And I think they are wrong, because people can acquire tacit skills quite well; it's not that
514
00:52:03,800 –> 00:52:08,200
hard to learn how to function in a lab you just need training you need a bit of effort you need the
515
00:52:08,200 –> 00:52:13,160
right kind of motivation and you might of course get help because you can automate more and more of
516
00:52:13,160 –> 00:52:19,560
stuff in the lab. So besides the language model being good at explaining and giving you ideas, you might
517
00:52:19,560 –> 00:52:26,520
also have a kind of biology support software and tools that actually perform experiments for you
518
00:52:26,520 –> 00:52:32,920
and that might change the question of how many people can do this. So in that paper I wrote,
519
00:52:33,800 –> 00:52:40,280
I’m thinking about a kind of risk pipeline from somebody having a bad intention over to understanding
520
00:52:40,280 –> 00:52:45,880
how to implement that biologically, over to getting a DNA sequence, getting that DNA sequence in a
521
00:52:45,880 –> 00:52:51,560
vial, transfecting an organism so it multiplies, testing it out, and unleashing it. All of these steps
522
00:52:51,560 –> 00:52:57,640
are hard; you can fail at them in various ways. Aum Shinrikyo, for example, when they tried to
523
00:52:57,640 –> 00:53:04,040
carry out their biological weapons attacks, accidentally heated up the botulinum toxin too much because of a badly
524
00:53:04,040 –> 00:53:11,000
designed system, I think, and it was mostly ineffective. That's great news: they failed at that step, and
525
00:53:11,000 –> 00:53:17,080
many lives were saved by that. But the tricky part is of course that this means that totally
526
00:53:17,080 –> 00:53:21,640
incompetent people are not going to get very far along the risk pipeline, while a very competent
527
00:53:21,640 –> 00:53:27,480
person is just going to get through every step very well. But the number of steps also determines how
528
00:53:27,480 –> 00:53:32,440
likely it is that you trip along the way, and if that gets shorter because you can automate it with lab
529
00:53:32,440 –> 00:53:39,000
automation, or you have useful lab software that helps you organize it, that increases the risk, perhaps
530
00:53:39,000 –> 00:53:44,760
most of all by helping people who don't know what they're doing with some of the steps.
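One crude way to make the "number of steps" point quantitative, offered here as an illustration rather than a formula from the paper mentioned, is to treat the pipeline as a chain of steps that must all succeed:

\[
P(\text{attack succeeds}) \;=\; \prod_{i=1}^{n} p_i ,
\]

where \(p_i\) is the would-be attacker's chance of getting step \(i\) right. For a middling attacker with \(p_i = 0.5\) at every step, seven steps give \(0.5^7 \approx 0.008\), while automating two of them away (\(n = 5\)) gives \(0.5^5 \approx 0.03\), roughly a four-fold increase; and the boost is largest exactly for the people whose \(p_i\) would otherwise be lowest.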
531
00:53:44,760 –> 00:53:50,840
So that gets us to the question of why we haven't seen anything yet. And I think the honest answer is it's a bit like when
532
00:53:50,840 –> 00:53:56,040
you're in the morning rush hour traffic: why aren't people pushing each other in front of oncoming
533
00:53:56,040 –> 00:54:02,760
trains and cars more often and the answer is most of us are nice most of us would never want to do that
534
00:54:02,760 –> 00:54:07,880
to anybody we can think the thought especially when it’s rainy and it’s November and we’re really
535
00:54:07,880 –> 00:54:14,120
grumpy, but yeah, we're not doing it; it's very rare that people behave like that. And right now the
536
00:54:14,120 –> 00:54:19,400
biohacker world is small and tightly knit, so they're probably not the big problem. I'm more worried
537
00:54:19,400 –> 00:54:25,000
about the kind of people who would become school shooters but again they’re not exactly the most
538
00:54:25,000 –> 00:54:31,720
intellectual people they’re driven by bitterness hatred and a lot of boiling emotions and they are
539
00:54:31,720 –> 00:54:36,920
following various scripts. It's actually one of the weirdest things when you look at terrorism, how
540
00:54:36,920 –> 00:54:43,160
scripted it is; many of the actions people do are just imitating other people before them. Here it is
541
00:54:43,160 –> 00:54:50,600
again, that mimesis. It turns out that up until recently the idea of driving a truck down a
542
00:54:50,600 –> 00:54:56,200
pedestrian street was nonexistent; then somebody did it and people started repeating it, which is
543
00:54:56,200 –> 00:55:02,760
a horrible thing, but the idea barely existed before, and it took somebody to do it the first time.
544
00:55:02,760 –> 00:55:09,400
Similarly, when it comes to suicide bombings, again, the forms of the attacks are scripted. And this is great,
545
00:55:09,400 –> 00:55:14,840
because that means that terrorists are not as creative as they could be. Over here in the UK there was
546
00:55:14,840 –> 00:55:21,320
a bunch of people at a hospital who got radicalized; they had access to a hospital, for heaven's sake. It's
547
00:55:21,320 –> 00:55:27,320
kind of a nightmare scenario if you're creative. But what did they do? Crappy car bombs that didn't work
548
00:55:27,320 –> 00:55:33,080
very well: one of them was badly parked and got towed away, and the final one ended up setting
549
00:55:33,080 –> 00:55:37,320
fire to their car while ramming it through the glass doors of Glasgow airport, and the attacker then got knocked
550
00:55:37,320 –> 00:55:45,080
over by a tourist. Okay, not very impressive; that's good for society and civilization here,
551
00:55:45,080 –> 00:55:52,040
so I'm not too worried about that. Then on the other hand you have governments. If you as a government
552
00:55:52,040 –> 00:55:56,760
decide, we're totally going to make a doomsday pathogen and here is the budget allocation for it,
553
00:55:56,760 –> 00:56:02,840
of course it could do it really well. Although, in practice, a lot of the kind of shady
554
00:56:02,840 –> 00:56:07,640
projects that militaries and intelligence agencies have done over history, many of them are
555
00:56:07,640 –> 00:56:13,720
embarrassing when you read what they actually did. Project MKUltra: okay, it's horribly
556
00:56:13,720 –> 00:56:20,440
unethical and bad, but also very bad research. The work on BZ, the deliriant gas, by the US
557
00:56:20,440 –> 00:56:25,880
military: again, if you had had anybody with a bit of project management skills that would have
558
00:56:25,880 –> 00:56:32,760
led to way more, but as it was it was somebody's hobby project. So it can go wrong, badly wrong. But
559
00:56:32,760 –> 00:56:37,720
occasionally you get somebody like Oppenheimer and General Groves, and you put them in charge of the
560
00:56:37,720 –> 00:56:43,000
project, and you get exactly what you want. And at that point of course you have competent people,
561
00:56:43,000 –> 00:56:49,400
big resources, and barbed wire keeping everybody out. It's going to be leaky, because it's the government
562
00:56:49,400 –> 00:56:54,120
doing it but there are few governments and most of them are not that mad and most of them don’t
563
00:56:54,120 –> 00:57:00,120
have much use for a doomsday pathogen. The problem is there are a few that actually would. You can think
564
00:57:00,120 –> 00:57:05,320
of North Korea: if you're leading North Korea, you're not entirely certain your nukes are up to the
565
00:57:05,320 –> 00:57:10,120
task, so you might also want some of your researchers to make a few doomsday pathogens just in case.
566
00:57:10,120 –> 00:57:15,080
And maybe you're a smaller nation that realizes, oh well, I haven't got the wonderful resources
567
00:57:15,080 –> 00:57:21,000
North Korea has got, we can't get that nuclear stuff, so let's work on the horrible bio stuff instead.
568
00:57:22,040 –> 00:57:27,240
And again, quite often this fails. One of the most common ways it fails for a totalitarian
569
00:57:27,240 –> 00:57:32,680
regime is that you have your yes-men around you, so you give the order and they say yes sir, yes
570
00:57:32,680 –> 00:57:37,400
sir, we'll start working immediately, and then they take all your money and build a shiny lab, and
571
00:57:37,400 –> 00:57:43,080
at some point they show you something, and it doesn't have to actually work for them to
572
00:57:43,080 –> 00:57:49,320
have a good career, and they feel like this is great. The problem is of course that occasionally they might
573
00:57:49,320 –> 00:57:55,320
actually be doing the real thing, and that is probably going to be easier in the future. And that suggests
574
00:57:55,320 –> 00:58:00,360
that we might want ways of controlling this but we don’t want to lose the freedom to do stuff in the lab
575
00:58:00,360 –> 00:58:06,360
In Germany it's even in the constitution that there is a freedom to do research. I think that's
576
00:58:06,360 –> 00:58:12,840
very nice, except that we might want to have a way of preventing freedom of research from turning
577
00:58:12,840 –> 00:58:19,080
into the freedom to make doomsday weapons. Generally we want to have ways of making that less likely,
578
00:58:19,400 –> 00:58:24,600
especially accidental doomsday weapons; there are fewer really malicious people than there
579
00:58:24,600 –> 00:58:29,800
are annoyingly stupid people who don't realize that their project is a bad idea.
580
00:58:29,800 –> 00:58:35,080
Around where I'm at there's a lot of, I think they're making meth or something,
581
00:58:35,080 –> 00:58:40,920
and the police are able to know, oh, there's someone in this region that's making meth, because they're
582
00:58:40,920 –> 00:58:46,440
buying all the supplies. So maybe that's another controlling factor: you need pretty specialized
583
00:58:46,440 –> 00:58:49,560
equipment to build these things, all that different stuff you talked about, not just the knowledge
584
00:58:49,560 –> 00:58:55,320
but also the equipment. So I imagine you're on a registry with all the things you've googled.
585
00:58:55,320 –> 00:59:03,560
I actually told some people from the intelligence world in the past I was at a meeting and I realized
586
00:59:03,560 –> 00:59:08,280
that, oh, I've got all the three-letter agencies standing around, and I told them, by the way, go home and check
587
00:59:08,280 –> 00:59:14,200
that I am on your watch list, because if I'm not, your method is not working. I should be on a lot of
588
00:59:14,200 –> 00:59:21,960
watch lists, because here we have somebody searching for and downloading nuclear fusion codes; that
589
00:59:21,960 –> 00:59:28,520
should be kind of going up a few notches. But the problem here is of course, yeah, I also know
590
00:59:28,520 –> 00:59:33,160
how to do this in secret. I don't bother, because I have the excuse that I'm doing research about
591
00:59:33,160 –> 00:59:39,480
existential risk, so I should be allowed; at least that's my excuse for when they show up. But
592
00:59:40,280 –> 00:59:46,120
if you want to do something really sneaky you can take steps to hide it although not all steps are
593
00:59:46,120 –> 00:59:50,920
effective. So if you're buying up a lot of ingredients to make meth in a vicinity, that's probably
594
00:59:50,920 –> 00:59:57,400
going to show up. This is getting harder for some things: the attempts at stopping people from
595
00:59:57,400 –> 01:00:02,360
making drugs have also meant that a lot of amateur chemists can't get the chemicals
596
01:00:02,360 –> 01:00:07,320
they previously would be buying in a chemical supply store. So if you go to YouTube you find all
597
01:00:07,320 –> 01:00:12,760
sorts of wonderful instructional films on how to generate them from household ingredients, and I
598
01:00:12,760 –> 01:00:17,480
find it very relaxing to watch people make horrible chemicals that way. But they're of course
599
01:00:17,480 –> 01:00:24,280
not making meth with them; they just want to have that sulfuric acid or that fuming nitric acid
600
01:00:24,280 –> 01:00:30,680
or that hydrazine for some other very ill-advised chemistry. Now, the interesting problem here is that
601
01:00:30,680 –> 01:00:37,160
tracking bad activity works well in the physical world of chemistry; it's tricky for biology,
602
01:00:37,160 –> 01:00:44,120
because the tools you need to make a doomsday pathogen are about the same tools as you need to make
603
01:00:44,120 –> 01:00:52,840
your bioluminescent C. elegans worms or bacteria, so you might actually have a harder time telling them apart.
604
01:00:52,840 –> 01:00:59,560
And of course the fear for AI is that doing the really dangerous AI, whether that is to
605
01:00:59,560 –> 01:01:06,360
commit big crimes or controlling drones to do attacks, might look the same. There are still
606
01:01:06,360 –> 01:01:10,280
interesting issues like maybe we could control the amount of compute you have access to
607
01:01:10,280 –> 01:01:16,440
for training a big neural network; that is a big run in a big data center. So there have been some people
608
01:01:16,440 –> 01:01:21,000
who argue, oh no, it's bad for the environment, look at how much energy it used up. And when you
609
01:01:21,000 –> 01:01:27,560
actually calculate it, it turns out that the GPT-3 training used about as much energy as it takes
610
01:01:27,560 –> 01:01:34,360
to make the steel in a 30-foot railway bridge. That's not nothing, but it's not like that kind
611
01:01:34,360 –> 01:01:39,080
of railway bridge is a major environmental concern; I don't know how many hundreds of them are
612
01:01:39,080 –> 01:01:45,000
getting built every year around the world. It's not enormous. The real problem is that
613
01:01:45,000 –> 01:01:51,240
from outside it’s impossible to tell whether you’re training a language model or a business model
614
01:01:51,240 –> 01:01:56,600
or something to make some dangerous military stuff they all look the same from outside
615
01:01:56,600 –> 01:02:02,680
And inspecting the code? Well, that's not even clear, because the same kind of neural network might
616
01:02:02,680 –> 01:02:08,040
be used differently depending on how you probe it, because it's also the training data that
617
01:02:08,040 –> 01:02:13,800
is in itself setting the function. It used to be that the design, the blueprint you have,
618
01:02:13,800 –> 01:02:18,920
or the code, was clearly expressing your intention, but now it might be part of the training data,
619
01:02:18,920 –> 01:02:24,200
which is of course also why we have this problem about various biases coming in through the data we
620
01:02:24,200 –> 01:02:32,520
get a lot of accidental intentions in our systems. So sometimes after a long day I get tired
621
01:02:32,520 –> 01:02:36,440
of like looking at things with my eyes so I’ll listen to an audiobook versus like read something
622
01:02:36,440 –> 01:02:42,520
and so you’re like actually using your brain and like looking around at the world and so I’m wondering
623
01:02:42,520 –> 01:02:47,880
do you ever, like, tag switch and do something entirely different? Or do you have things
624
01:02:47,880 –> 01:02:52,280
that you do? I don't know, like you plant plants or something, like gardening, or you do biohacking,
625
01:02:52,280 –> 01:02:56,520
something that’s different to like help offset like the focus that you have on the different things
626
01:02:56,520 –> 01:03:01,400
you're working on? Yeah, a few years back I had this momentary realization. The
627
01:03:01,640 –> 01:03:09,560
curator of a materials library at Imperial College gave a lovely talk about various things, and
628
01:03:09,560 –> 01:03:14,120
she mentioned that yeah sometimes I just feel like it’s a zinc day and she just brought up
629
01:03:14,120 –> 01:03:19,000
zinc objects from the materials library and put them on her desk. And then I realized that everything I did
630
01:03:19,000 –> 01:03:24,520
was information: on a good day I might be writing something, I would be sending off emails, I might be
631
01:03:24,520 –> 01:03:30,280
making some computer graphics. It's all information, it's all moving bits around; sometimes it gets printed
632
01:03:30,280 –> 01:03:35,720
out, but maybe I should do something physical. So over the next few months I looked around for
633
01:03:35,720 –> 01:03:42,040
something physical to do. So I ended up both collecting beetles, which is nice and also a fun way of
634
01:03:42,040 –> 01:03:48,200
enjoying nature and its craziness, and also, and this intensified during COVID, I started cooking.
635
01:03:48,200 –> 01:03:54,920
And it's interesting because you can still use your science and chemistry skills in the kitchen;
636
01:03:54,920 –> 01:04:00,040
you just need to know enough about what's going on to start linking it up.
637
01:04:00,040 –> 01:04:05,320
I used to be super frustrated trying to learn how to cook and bake by asking my mother because
638
01:04:05,320 –> 01:04:10,680
she knew how to do it properly but she couldn’t explain why you’re supposed to do it so I didn’t know
639
01:04:10,680 –> 01:04:16,360
what parameters I could change etc and then during COVID I was just alone at home I could just
640
01:04:16,360 –> 01:04:21,400
play around and if it was a disaster nobody would know maybe the neighbors would smell it but that’s
641
01:04:21,400 –> 01:04:28,760
about it. So I could play around, and I was reading up, and I found a nice book, Cooking for Geeks, which
642
01:04:28,760 –> 01:04:35,320
really appealed to me because it was explaining cooking to a computer scientist, and not even in the
643
01:04:35,320 –> 01:04:41,560
normal way: first it started with that mystery of how you get spices and tastes that go well together,
644
01:04:41,560 –> 01:04:46,360
demonstrating how you can approach cooking by actually doing statistics on what goes well together in recipes
645
01:04:46,360 –> 01:04:52,040
online; then formatting your kitchen, what are the tools and equipment and why do you have them;
646
01:04:52,040 –> 01:04:57,000
and then basically one section about, okay, here is what happens at different temperatures to
647
01:04:57,000 –> 01:05:02,360
different things in food, and now you can start putting things together once you have that key.
648
01:05:02,360 –> 01:05:07,480
And that was what worked for me; other people might find other books useful. Then it's easier to
649
01:05:07,480 –> 01:05:13,080
start understanding. I read a lot of molecular gastronomy; I like Harold McGee's On Food and Cooking,
650
01:05:13,080 –> 01:05:18,840
which is this enormous tome going through everything: one chapter about milk, where you get into
651
01:05:18,840 –> 01:05:24,600
the molecular nature of milk and why does milk do what it does when you heat it etc one chapter
652
01:05:24,600 –> 01:05:30,680
about eggs what is an egg why does it behave like this and then it leads to interesting questions
653
01:05:30,680 –> 01:05:36,120
like how do you make a decent hollandaise sauce. And once you're going on that, eventually of
654
01:05:36,120 –> 01:05:42,040
course you also get practical skills. I'm still not a great cook in the sense of having an
655
01:05:42,040 –> 01:05:47,480
elegant kitchen and everything in its right place; it's messy, and I need to clean up probably a lot
656
01:05:47,480 –> 01:05:55,560
of the time, but I know I can generate food that seems to be tasty; at least people are polite.
657
01:05:55,560 –> 01:06:01,240
And the most interesting thing is that this is of course where you can use both your intellectual skills
658
01:06:01,240 –> 01:06:06,040
and your sensory skills. You actually need to taste the food. Which taste combinations
659
01:06:06,040 –> 01:06:11,480
are good? Well, you still need to take a taste of that sauce and try to figure out what's missing here:
660
01:06:11,480 –> 01:06:17,400
is it salt, is it some acidity, or do I throw it out and try something else?
661
01:06:17,560 –> 01:06:24,120
And that is a good way of doing a context switch. Similarly, practical things like just washing the
662
01:06:24,120 –> 01:06:29,560
dishes, I've regarded that as a kind of meditation: it's simple, my hands already know what they're doing,
663
01:06:29,560 –> 01:06:35,160
and meanwhile I'm kind of thinking about nothing in particular. One of the big problems when you're an
664
01:06:35,160 –> 01:06:39,800
intellectual is that there is always something to think about, and quite often you even have it
665
01:06:39,800 –> 01:06:44,680
assigned as a task, which is horrible: I need to think about the structure of that book chapter I'm
666
01:06:44,680 –> 01:06:50,920
supposed to be submitting next week, but I'm not going to progress that much on it by just thinking about
667
01:06:50,920 –> 01:06:56,040
it. It's probably more likely that I'm going to figure out an elegant way of expressing an
668
01:06:56,040 –> 01:07:01,560
argument while walking or doing the dishes or trying to take a good photo of that darn
669
01:07:01,560 –> 01:07:08,760
beetle running away from me. Just a quick question on this: is there a dish you'd recommend?
670
01:07:08,760 –> 01:07:12,520
I like cooking as well; I've been getting into making bread. There's this little
671
01:07:12,520 –> 01:07:17,320
tiny Dutch oven I got at Target and I make little tiny breads; it is adorable and I love it.
672
01:07:17,320 –> 01:07:23,400
But is there something you recommend people try making, something that you
673
01:07:23,400 –> 01:07:30,920
enjoy making? Yeah, I do enjoy making a dish that forces you to do several different styles of things,
674
01:07:30,920 –> 01:07:39,960
so one of my standard things is fried salmon fillets with wilted spinach and mushroom sauce.
675
01:07:41,000 –> 01:07:47,720
The salmon is interesting because here you want to heat the fish so you get a nice crust
676
01:07:47,720 –> 01:07:53,720
that is full of taste but you also don’t want to overheat it so it becomes dry and boring so there is
677
01:07:53,720 –> 01:08:00,760
a bit of observation and controlling temperature; that's one interesting separate thing. The wilted
678
01:08:00,760 –> 01:08:05,960
spinach is super easy just take a frying pan you have some oil some salt maybe a little bit of garlic
679
01:08:05,960 –> 01:08:11,880
you just put in the spinach leaves, you move them around a fair bit on a not-too-high heat,
680
01:08:11,880 –> 01:08:18,440
the cell membranes break, they turn into this nice green fresh mush, and it's super easy to do. And then
681
01:08:18,440 –> 01:08:23,240
of course the mushroom sauce; that's also fun, because mushrooms behave very differently from
682
01:08:23,240 –> 01:08:29,160
much else in the kitchen. So one thing I've done is repeat this many times, so I know how to do it
683
01:08:29,160 –> 01:08:34,760
fairly well. Same thing with my mushroom pie; that's another thing I'm making almost every
684
01:08:34,760 –> 01:08:41,640
week: I make a pie crust, which is very interesting because, unlike bread, where you want the gluten to be
685
01:08:41,640 –> 01:08:47,000
developed, making it elastic, here you desperately want the gluten not to develop. That's why
686
01:08:47,000 –> 01:08:53,880
you're supposed to work with cold butter and cold water and kind of work very quickly, because
687
01:08:53,880 –> 01:09:00,200
basically the starch needs to be held together just by a bit of fat before you then put it in the oven
688
01:09:00,200 –> 01:09:06,200
and then you fry up mushrooms to remove a lot of water and put on a ridiculous amount of lovely
689
01:09:06,200 –> 01:09:13,000
cheese etc might not be the super healthiest but it’s very enjoyable and it’s vegetarian if not vegan
690
01:09:13,000 –> 01:09:18,360
now the interesting thing is again doing the same dish a number of times you start learning the
691
01:09:18,360 –> 01:09:24,920
parameters and you can try experimenting. So your small bread, for example, to me that sounds really
692
01:09:24,920 –> 01:09:29,480
tricky because I’m very bad at bread making I’m great with cakes I’ve been doing that since I was
693
01:09:29,480 –> 01:09:36,280
a kid, but bread I still find a mystery. Is there an aspect of bread making that is difficult?
694
01:09:36,280 –> 01:09:43,560
Kneading, I think, is what I'm bad at. So mixing stuff together, I'm totally fine with that;
695
01:09:43,560 –> 01:09:48,920
actually manipulating it so you get the right fiber structure, that is the key thing.
696
01:09:48,920 –> 01:09:54,840
I know all the theory stuff but I don’t have that practical skill and then you get this
697
01:09:54,840 –> 01:10:00,200
interesting feedback effect: since I don't really think I'm good at bread, I'm rarely doing it,
698
01:10:00,200 –> 01:10:05,080
so I'm not getting those skills. What I probably ought to be doing is just buying an enormous
699
01:10:05,080 –> 01:10:10,680
amount of flour and then spending a weekend just making crappy bread until I know how to do it properly
700
01:10:10,680 –> 01:10:16,440
that’s perhaps unlikely for me to actually do but there is this interesting
701
01:10:16,440 –> 01:10:21,960
specialization in what happens when you're motivated to get good at something. So when I was a kid,
702
01:10:23,080 –> 01:10:27,960
my brother and me share the same birthday so we were of course arguing with our parents that we wanted
703
01:10:27,960 –> 01:10:35,640
two separate birthday cakes as you would as brothers and our parents said yeah you have to bake them
704
01:10:35,640 –> 01:10:41,640
yourself if you want that. I called their bluff and said I'm willing to do it; they called my bluff and
705
01:10:41,640 –> 01:10:47,480
handed over a cookbook, and then I started making the birthday cakes in my family, and that's how I
706
01:10:47,480 –> 01:10:52,840
actually got started in the kitchen. But birthday cakes are usually much easier; that's not the
707
01:10:52,840 –> 01:11:00,040
most demanding form of cooking. Yeah, I was gonna suggest that you could just make, like, a batch
708
01:11:00,040 –> 01:11:06,440
of dough, and then make loaves, have six different versions of them, and then just increment
709
01:11:06,440 –> 01:11:11,240
the kneading by one more minute on each and then see how they came out; and you'd
710
01:11:11,240 –> 01:11:14,280
develop most of them at the same time, which would be less than a weekend, you'd probably do it
711
01:11:14,280 –> 01:11:18,840
in an afternoon. But yeah, the kneading is a bit of an art. That's how I did it:
712
01:11:18,840 –> 01:11:23,880
I made like six to eight different little loaves and then I kneaded them all for different
713
01:11:23,880 –> 01:11:29,880
amounts of time, and then I baked them all and had a sample of each, and I also gave them
714
01:11:29,880 –> 01:11:34,760
to a couple of people as well, as a blind test. I find that sometimes if you give
715
01:11:34,760 –> 01:11:38,120
people six different versions of something to taste, they might not be able to tell the
716
01:11:38,120 –> 01:11:42,600
difference, so I always make it between two different things; I limit it down to the two extremes or
717
01:11:42,600 –> 01:11:46,600
two different profiles I'm trying to test apart. Like, if you ask, what's the
718
01:11:46,600 –> 01:11:51,000
difference between one and six, it's sometimes very difficult for people, but
719
01:11:51,000 –> 01:11:53,960
what's the difference between one and three is really easy for them. So that
720
01:11:53,960 –> 01:11:59,240
might be a fun thing. And this is a brilliant way of actually experimenting
721
01:11:59,240 –> 01:12:05,400
properly. Incidentally, that comparison observation is super valuable; that's well worth
722
01:12:05,400 –> 01:12:11,800
everybody remembering, because once you start doing comparisons instead of trying to make some
723
01:12:11,800 –> 01:12:17,960
general judgment, everything turns out better. Yeah, it makes it easier to make decisions too.
724
01:12:17,960 –> 01:12:23,640
Like sometimes my wife asks, hey, what do you want for dinner? And it's like,
725
01:12:23,640 –> 01:12:27,400
oh, do you want this? No. Well, do you want this or this? And it's like, well,
726
01:12:27,400 –> 01:12:32,600
I like the other one better. It makes things easier. So, I don't know if you watch that TV show Westworld,
727
01:12:32,600 –> 01:12:37,640
but in Westworld they talked about how when they were recreating human consciousness that at first
728
01:12:37,640 –> 01:12:42,040
they thought it was like this big complex thing but it actually was code that could fit in like a
729
01:12:42,040 –> 01:12:46,520
really small book like it’s not that complicated and I’ve also heard that some people say that
730
01:12:46,520 –> 01:12:51,160
when there's true generalized AI it would be really small; it won't be this
731
01:12:51,160 –> 01:12:55,080
complex thing; the actual code for the component that allows the rest to form will be really
732
01:12:55,080 –> 01:13:00,280
small. And then I'm thinking in conjunction with Elon Musk, who says when he builds something he
733
01:13:00,280 –> 01:13:04,200
deletes parts until it just barely still solves the functionality, so he has this minimalism
734
01:13:04,200 –> 01:13:10,760
approach. And so I'm wondering how that all squares, because this goes to a fan question that I'm
735
01:13:10,760 –> 01:13:14,760
trying to tie in here; I think I might be ham-fisting it a little bit, but they're asking about
736
01:13:14,760 –> 01:13:19,720
how do you emulate a whole brain. I wonder, is it the Westworld simplicity? Is it how much could
737
01:13:19,720 –> 01:13:23,960
you delete before you lose the functionality? Is that the way? And in conjunction with that, they're
738
01:13:23,960 –> 01:13:29,080
asking about opportunities in the field of whole brain emulation, you know, communication,
739
01:13:29,080 –> 01:13:32,920
brain synthesis, signal processing, that type of thing. So I had a question but I'm also
740
01:13:32,920 –> 01:13:37,480
trying to fit in their question at the same time. Well, I think there is probably an interesting link here,
741
01:13:37,480 –> 01:13:43,640
because that earlier idea about comparison is in some sense a compression question:
742
01:13:43,640 –> 01:13:49,240
do I like this better than that? Well, there is one bit of information in the answer, and I have
743
01:13:49,240 –> 01:13:55,720
halved the search space.
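The "one bit per comparison" remark can be read information-theoretically (a standard gloss rather than something spelled out in the conversation): a yes/no comparison yields at most one bit, so picking the best of \(N\) candidates needs at least

\[
\lceil \log_2 N \rceil
\]

binary comparisons, for example three questions for eight bread variants, since \(2^3 = 8\). Each well-chosen comparison halves the remaining search space, which a single "rate this overall" judgment does not reliably do.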
744
01:13:55,720 –> 01:14:01,640
Now a lot of science, and even understanding the world, is about finding a compressed representation of what's going on. So this again ties into looking at the sky and
745
01:14:01,640 –> 01:14:07,720
looking at the world and having good explanations. Now, a good explanation is not necessarily
746
01:14:07,720 –> 01:14:16,120
just a tale; it's like a program, a program that can generate predictions about what's going on.
747
01:14:16,120 –> 01:14:22,360
So if I have a really good explanation of the universe, that is a short program that generates
748
01:14:22,360 –> 01:14:26,200
pretty good predictions of what happens if I do different things, and then I can test it by
749
01:14:27,560 –> 01:14:33,720
making a guess, running it, and then comparing that to reality. Now the problem is, what about the
750
01:14:33,720 –> 01:14:39,480
brain? How compressible is the brain? That's really the question here, and there are these general
751
01:14:39,480 –> 01:14:45,960
theorems about the compressibility of software saying that it's in some sense impossible
752
01:14:45,960 –> 01:14:53,320
to know for certain except by computing all possibilities. In practice we quite often find this out
753
01:14:53,320 –> 01:14:59,000
by understanding the subsystems. So when you think about the brain, we understand the neurons decently
754
01:14:59,000 –> 01:15:04,840
well, we know how they send signals. At this point somebody will bring up, but wait a minute, what about,
755
01:15:04,840 –> 01:15:09,800
and then the latest paper showing some weird thing going on. And there is this tendency in
756
01:15:09,800 –> 01:15:14,600
neuroscience where people say, oh, the brain is the most complex thing in the universe and we
757
01:15:14,600 –> 01:15:21,080
don't understand anything of it, which on one hand is very humble, and it's also kind of a humble
758
01:15:21,080 –> 01:15:26,120
brag, oh, I'm studying this super awesome thing, but it also conceals the fact that we know
759
01:15:26,120 –> 01:15:31,320
a fair bit: we know about its electrical properties, its chemistry, we can actually make brains
760
01:15:31,320 –> 01:15:36,920
do a surprisingly, shockingly large amount of stuff. It's just that we also, to start with, know that we don't know
761
01:15:36,920 –> 01:15:42,040
some things, and sometimes they're very relevant things, and sometimes it's stuff that's just unknown
762
01:15:42,040 –> 01:15:48,680
unknowns. So when you think about brain emulation, the typical concept many people have is you take a
763
01:15:48,680 –> 01:15:55,720
brain, you scan it using some interesting technology we can just hand-wave for the moment, and then
764
01:15:55,720 –> 01:16:00,280
you get a one-to-one representation which is probably going to be this big computational neuroscience
765
01:16:00,280 –> 01:16:05,640
simulation: you simulate little compartments inside the neurons that have roughly the same electrical
766
01:16:05,640 –> 01:16:11,320
potential and chemical mixture, and we have had equations since the 1950s, the Hodgkin-Huxley
767
01:16:11,320 –> 01:16:16,440
equations; they are still valid, it's just that we need to update them with a lot of extra terms for
768
01:16:16,440 –> 01:16:23,080
all the weird stuff going on in biology, and then you just run it, right? Easy peasy.
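For reference, the Hodgkin-Huxley membrane equation being alluded to, in its standard 1952 textbook form for a single compartment, is

\[
C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - \bar{g}_L\,(V - E_L) + I_{\mathrm{ext}},
\]

with each gating variable obeying first-order kinetics such as

\[
\frac{dm}{dt} = \alpha_m(V)\,(1 - m) - \beta_m(V)\, m,
\]

and likewise for \(h\) and \(n\); the "extra terms" mentioned are additional currents and chemical state variables bolted onto this same skeleton.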
769
01:16:23,080 –> 01:16:28,600
Okay, you also need an environment simulation and a body simulation; that's also kind of a mess, but this sounds doable. But now
770
01:16:28,600 –> 01:16:34,440
you're not getting a very compressed representation; you're trying to make this one-to-one model because that
771
01:16:34,440 –> 01:16:40,200
is probably the easiest thing to do based on a scan. If I take a scan of a piece of brain tissue,
772
01:16:40,200 –> 01:16:44,920
I can see the neurons and the connections, and hopefully we can figure out a way of getting
773
01:16:44,920 –> 01:16:50,760
the chemical and electrical properties too; that's the big, big question mark, how to actually do that,
774
01:16:50,760 –> 01:16:55,720
because we can get the connectome these days, more and more, for bigger and bigger organisms, but that's
775
01:16:55,720 –> 01:17:00,520
not necessarily telling us enough, because that's a dead brain; we want to actually compare it to a live brain,
776
01:17:00,520 –> 01:17:06,920
and that’s much trickier but once you have that low-level model that doesn’t tell you anything about
777
01:17:06,920 –> 01:17:11,160
high-level stuff including things like consciousness or intelligence or memory or attention
778
01:17:11,880 –> 01:17:16,680
you don't even get to see where a lot of this is in the brain; you just have this big simulation.
779
01:17:16,680 –> 01:17:24,120
If it works really well, of course, that emulated person will now say things about whether he
780
01:17:24,120 –> 01:17:30,920
is conscious and maybe write a love poem, etc. Great, we know that it works in that case. But how much
781
01:17:30,920 –> 01:17:35,720
could you cut it down? Many people in computational neuroscience believe that neurons are
782
01:17:35,720 –> 01:17:41,480
probably too low-level a representation. My advisor, Professor Anders Lansner, had this view
783
01:17:41,480 –> 01:17:47,880
that it's probably the cortical microcolumn, which is a few hundred to a few thousand neurons, that is
784
01:17:47,880 –> 01:17:52,120
actually the computational unit; they're working together as a little microprocessor, and
785
01:17:52,120 –> 01:17:56,280
these columns connect to each other, but the individual neurons are doing fairly
786
01:17:56,280 –> 01:18:03,800
small tasks, and you could replace them all with these higher-order units.
787
01:18:03,800 –> 01:18:10,040
That's kind of a nice idea; we don't know whether this is true, but we could test it if we have the
788
01:18:10,040 –> 01:18:15,880
brain emulation. Whereas if I start out with this idea and try to map it onto a brain, I
789
01:18:15,880 –> 01:18:22,920
will not know how to do it. So the likely way we get brain emulation is we start with a very complete,
790
01:18:22,920 –> 01:18:29,160
very messy representation and then see how much we can refine it and hopefully this can be refined
791
01:18:29,160 –> 01:18:34,360
once we're actually getting input from real animals. And that is what gets us over to
792
01:18:34,360 –> 01:18:40,360
the question: so what do we need to do, where are the career opportunities? And right now the scanning
793
01:18:40,360 –> 01:18:45,000
side seems to have a lot of cool possibilities. Expansion microscopy means that we can do
794
01:18:45,000 –> 01:18:49,720
awesome things by expanding your tissue to be big enough to see in a microscope, and
795
01:18:49,720 –> 01:18:55,480
array tomography seems to be able to find a lot of different chemical
796
01:18:55,480 –> 01:19:01,880
traces and of course the people with electron microscopes are figuring out ways of doing slicing
797
01:19:01,880 –> 01:19:08,120
and scanning on larger scales, so cool stuff is happening there. The computing is also kind of there:
798
01:19:08,120 –> 01:19:15,000
you have people working on better microchips and better integration. The translation part is the
799
01:19:15,000 –> 01:19:21,480
really annoying thing: if I have a slice of a brain and a good scan, I cannot turn that into something that
800
01:19:21,480 –> 01:19:27,080
runs; nobody has done this yet. I think that is one big challenge, and we probably need to invent
801
01:19:27,080 –> 01:19:33,160
a bit of science here, because it's one thing to base it on what we already know about the brain;
802
01:19:33,160 –> 01:19:38,120
we know that there are unknown unknowns, and some of them are more like strongly suspected known
803
01:19:38,120 –> 01:19:43,400
unknowns: temperature, for example, affects various processes in a lot of ways, so we probably need
804
01:19:43,400 –> 01:19:49,480
a temperature model. That's complicating things in an annoying and boring way, but it's probably not
805
01:19:49,480 –> 01:19:54,760
hard. Probably. We don't know; we need to test this. And then you need to be able to run an
806
01:19:54,760 –> 01:20:00,120
experiment in your computational model and go to the real world and see, did it predict the right thing?
807
01:20:00,120 –> 01:20:06,600
And if it didn't, you need to find the delta and use that to figure out what you missed.
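A minimal sketch of that "find the delta" loop, with every function name here a hypothetical placeholder rather than an existing toolchain:

```python
import numpy as np

def validation_loop(simulate, record, stimuli, tolerance=0.05):
    """Drive the emulation and the real preparation with the same stimuli and
    rank the stimuli by how badly the model's prediction missed (RMSE).
    `simulate` and `record` are hypothetical callables returning response traces."""
    deltas = {}
    for stim in stimuli:
        predicted = np.asarray(simulate(stim))   # run the computational model
        observed = np.asarray(record(stim))      # record from the real tissue
        deltas[stim] = float(np.sqrt(np.mean((predicted - observed) ** 2)))
    worst_first = sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)
    # The largest mismatches point at what the model is missing (a channel, temperature, ...)
    return worst_first, all(d <= tolerance for d in deltas.values())
```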
808
01:20:06,600 –> 01:20:11,240
This is where we probably need to do the most methodological innovation, and where the genius insights might
809
01:20:11,240 –> 01:20:16,120
be needed or it might just be a lot of hard work and elbow grease where you have a lot of people in
810
01:20:16,120 –> 01:20:23,240
the lab testing a lot of possibilities, or building up automated, AI-supported systems to do scans,
811
01:20:23,240 –> 01:20:30,200
simulations, testing, and comparisons. So I think there is a lot of work, both for people working on the AI-
812
01:20:30,200 –> 01:20:37,160
supported research, for developing ways of interpreting scan data, for the practicalities of scanning
813
01:20:37,160 –> 01:20:44,600
tissue, and also maybe for modifying tissue. Brain implants are interesting because they allow you to
814
01:20:44,600 –> 01:20:50,440
send a signal and see what the response is. If you can do that and then compare it to what the
815
01:20:50,440 –> 01:20:57,000
response is in your simulation, you learn quite a lot more than by just being observational. So we want
816
01:20:57,000 –> 01:21:02,760
to close the loop here, and that's going to require a lot of development. The cool part is some of
817
01:21:02,760 –> 01:21:08,360
this is useful even in standard neuroscience. The basic goal of brain emulation is kind of outside
818
01:21:08,360 –> 01:21:12,760
what normal neuroscience is about, because it doesn't give you an understanding of what the brain is:
819
01:21:12,760 –> 01:21:17,240
it will not tell you what intelligence is; you just end up with an intelligent system that you
820
01:21:17,240 –> 01:21:23,320
now need to do research on. But it would produce a lot of intermediate ways of investigating these
821
01:21:23,320 –> 01:21:28,520
systems, some of which are good for science, some of which might be medically useful. After all, just
822
01:21:28,520 –> 01:21:34,680
imagine if we could find a good way of seeing where pain is coming from in a tissue: just pour
823
01:21:34,680 –> 01:21:40,840
on some nanoparticle reagent and it changes color when it binds to C fibers and then starts
824
01:21:40,840 –> 01:21:44,920
shimmering when there is a signal in the C fibers and we know this is where the pain is
825
01:21:45,960 –> 01:21:51,960
Whoa, that would be rather valuable for a lot of people. So there is a lot of cool stuff in this
826
01:21:51,960 –> 01:21:58,440
neurotech area that I think one can get into, and it might be on the materials science side, making
827
01:21:58,440 –> 01:22:03,160
those nanoparticles; it might be on the more biological side, like how do I interface with the immune
828
01:22:03,160 –> 01:22:08,840
system; it might be on the device side, whether that is an implant or a robot or an electron microscope;
829
01:22:08,840 –> 01:22:14,280
it might be on the software side, how do I interpret these things; or it might be on kind of a research-
830
01:22:14,280 –> 01:22:19,720
planning or systems engineering side: how do I set up this feedback loop, and how do I get funding for it?
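[Editor's sketch, not from the conversation: a minimal, hypothetical version of the closed loop described above, assuming a toy one-parameter rate model with a Q10-style temperature factor. The function names, numbers, and update rule are illustrative assumptions only; the synthetic "tissue" stands in for a real slice recording or implant readout.]

# Hypothetical closed-loop calibration: stimulate, record, simulate,
# measure the delta, and use it to adjust the model.
import numpy as np

rng = np.random.default_rng(0)

def tissue_response(stimulus, temperature_c):
    # Stand-in for real measurements: a firing-rate curve whose gain
    # (unknown to the model at first) scales with temperature, plus noise.
    true_q10 = 2.3
    gain = true_q10 ** ((temperature_c - 37.0) / 10.0)
    return gain * np.tanh(stimulus) + rng.normal(0.0, 0.01, stimulus.shape)

def model_response(stimulus, temperature_c, q10_estimate):
    # The emulation's current guess, with one tunable temperature factor.
    gain = q10_estimate ** ((temperature_c - 37.0) / 10.0)
    return gain * np.tanh(stimulus)

stimulus = np.linspace(0.0, 2.0, 50)
q10_estimate = 1.0  # start by assuming temperature does not matter

for round_number in range(20):
    temperature_c = rng.uniform(32.0, 40.0)      # vary the experimental condition
    observed = tissue_response(stimulus, temperature_c)
    predicted = model_response(stimulus, temperature_c, q10_estimate)
    delta = observed - predicted                  # what the model missed
    # Infer this round's gain by least squares, then nudge the model
    # (skip rounds near 37 C, where the temperature exponent amplifies noise).
    basis = np.tanh(stimulus)
    gain_observed = np.dot(observed, basis) / np.dot(basis, basis)
    if abs(temperature_c - 37.0) > 1.0:
        q10_round = gain_observed ** (10.0 / (temperature_c - 37.0))
        q10_estimate = 0.7 * q10_estimate + 0.3 * q10_round
    print(f"round {round_number:2d}  T={temperature_c:4.1f}C  "
          f"mean|delta|={np.mean(np.abs(delta)):.4f}  Q10~{q10_estimate:.2f}")

[In a real program the tissue_response would come from actual recordings, the model would be a full compartmental or network simulation, and the delta would drive far more than a single parameter; the point here is only the shape of the loop: predict, compare, find the delta, refine.]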
831
01:22:19,720 –> 01:22:25,880
Yeah, and I know we're running long, so just two quick questions. There's a person who is
832
01:22:25,880 –> 01:22:32,040
roughly asking how non-technical people can be a part of these types of projects to help out.
833
01:22:32,040 –> 01:22:37,640
They have a longer question, but I feel like it's really just: for brain emulation or anything that we
834
01:22:37,640 –> 01:22:40,520
previously talked about, how would a non-technical person come in and help out?
835
01:22:42,200 –> 01:22:50,680
I think that's an interesting question. So it used to be that science was seen as unproblematic
836
01:22:50,680 –> 01:22:55,160
and it's always good, and we should all respect the scientists because they have the truth,
837
01:22:55,160 –> 01:23:00,280
and we have kind of rightly challenged that in the modern world. But we also ended up in this
838
01:23:00,280 –> 01:23:06,440
weird situation where, okay, people say "trust the science". Wait a minute, science is about testing and
839
01:23:06,440 –> 01:23:12,680
not taking your word for it. That's even the motto of the Royal Society in London,
840
01:23:12,680 –> 01:23:17,800
nullius in verba: don't take our word for it, you actually need to check things.
841
01:23:17,800 –> 01:23:24,440
So we have ended up with this weird situation where the very good idea of democratizing things, and of
842
01:23:24,440 –> 01:23:30,120
not accepting authorities just because we say we're authorities, has also turned a little bit into an
843
01:23:30,120 –> 01:23:36,440
anti-science attitude. The reasonable anti-elitist attitude has also turned into this
844
01:23:36,440 –> 01:23:41,560
distrust of expertise, and an assumption that just because I can do some research on YouTube, I know
845
01:23:41,560 –> 01:23:46,920
just as much as the expert. And we have a bigger malaise in our culture, and that is of course that
846
01:23:46,920 –> 01:23:51,320
many people don't think that we're making progress; it's just one darn thing after another,
847
01:23:51,320 –> 01:23:54,760
and thinking about the future is rather pessimistic.
848
01:23:55,720 –> 01:24:01,240
Now, non-technical people have an important role here, because we're all kind of embedded in the zeitgeist,
849
01:24:01,240 –> 01:24:06,520
this idea about what the world is and where it's going, and the stories we tell each other about
850
01:24:06,520 –> 01:24:12,120
whether we hope for the future or fear for the future, and what we should be doing in the future.
851
01:24:12,120 –> 01:24:17,320
And generally I think we need to work rather hard on this project of reigniting
852
01:24:17,320 –> 01:24:22,760
the idea that yeah, we can actually build stuff, we can understand stuff, we can actually make
853
01:24:22,760 –> 01:24:28,520
the world better on a vast scale. That doesn't mean that we should always trust people who say that
854
01:24:28,520 –> 01:24:33,800
they can do it; actually, we should be rather good at scrutinizing their agendas and their plans,
855
01:24:33,800 –> 01:24:39,080
at pointing out that quite a lot of the emperors have very little clothing on, and that quite a lot of
856
01:24:39,080 –> 01:24:44,840
the projects might be leaving out important stakeholders, etc. But it means that we're actually
857
01:24:44,840 –> 01:24:50,920
jointly trying to work together. And this is where, I think, among scientists there is this idea that
858
01:24:50,920 –> 01:24:57,400
science needs to do more science communication: we all need to reach out and talk to the stakeholders.
859
01:24:57,400 –> 01:25:02,440
But in its simplest form, this is of course somebody stepping down from the ivory tower and telling
860
01:25:02,440 –> 01:25:07,480
the world about some cool stuff, and you're supposed to be grateful for it. That doesn't work. The second
861
01:25:07,480 –> 01:25:12,680
step was: oh yes, people don't know the stuff, but once they know about how genetic engineering or AI
862
01:25:12,680 –> 01:25:17,560
works, they're all going to make up their minds in a useful way. Turns out that the deficit model
863
01:25:17,560 –> 01:25:22,040
of science communication is also a disaster, because usually people just get more polarized: they have
864
01:25:22,040 –> 01:25:28,120
an opinion already, and now it just gets reinforced because they have a piece of evidence in
865
01:25:28,120 –> 01:25:34,680
favor of it. Now, the thing that actually works better is when people get involved, and I do think that
866
01:25:34,680 –> 01:25:40,840
we need to work out ways of getting involved, and I don't necessarily know the best ways of doing that.
867
01:25:40,840 –> 01:25:46,440
Some of it is of course just talking to people: people in the ivory tower should be trying to talk
868
01:25:46,440 –> 01:25:55,400
more to people outside, but the same goes in the opposite direction too. So basically what happens is
869
01:25:55,400 –> 01:26:04,120
that you need to have this interaction going both ways. I'm delighted by getting emails from a boy
870
01:26:04,120 –> 01:26:09,240
somewhere in Greece who just somehow found my email address and started asking me weird questions
871
01:26:09,240 –> 01:26:15,720
about astronomy. Yeah, maybe I should be doing other stuff, but I give it fairly high priority because
872
01:26:15,720 –> 01:26:21,960
I think it's really cool to just talk to somebody who's just interested in general. And I think what the
873
01:26:21,960 –> 01:26:27,640
non-scientists can do here is actually helping us have this general discussion, both talking to
874
01:26:27,640 –> 01:26:33,560
the scientists but also talking to other people, and helping build a culture of progress, including
875
01:26:33,560 –> 01:26:38,760
defining what the heck progress actually means to us. Because right now you find very few
876
01:26:38,760 –> 01:26:44,040
positive visions, so even if you have a partial positive vision, it actually gets a lot of impact.
877
01:26:45,160 –> 01:26:51,480
This is partially why some religious fundamentalist groups are getting traction, simply because
878
01:26:51,480 –> 01:26:56,040
they have a kind of positive vision. It's super reactionary and limited, but at least they think we
879
01:26:56,040 –> 01:27:02,600
can be going there. Meanwhile, a lot of liberal society doesn't dare to propose a vision. Why?
880
01:27:02,600 –> 01:27:09,560
Well, that proposed vision might be against what somebody likes, so we're only going to talk about what
881
01:27:09,560 –> 01:27:13,800
we're against, and there is a long list of things that all reasonable people are against, so we'll
882
01:27:13,800 –> 01:27:19,560
try to do risk minimization and be inclusive. But that's not a positive vision; you need to have
883
01:27:19,560 –> 01:27:26,120
something to aim for. The environmentalist movement, to some extent, has been trapped by its success: we
884
01:27:26,120 –> 01:27:31,720
know what we're against, but constructing a green society that people would actually like to be in
885
01:27:31,720 –> 01:27:36,440
is very different from a lot of the standard wishes, which is a small-scale society that doesn't fit
886
01:27:36,440 –> 01:27:41,240
that many people, and somehow a lot of people need to disappear from the equation to get that nice
887
01:27:41,240 –> 01:27:47,000
little small-scale society. Scaling it up so you can have a solarpunk society that has cities with
888
01:27:47,000 –> 01:27:53,400
10 million people is outside the normal discourse, but it probably should be done. And I think that, from my
889
01:27:53,400 –> 01:27:57,720
own transhumanist perspective, I would like to have people think about what you would actually want
890
01:27:57,720 –> 01:28:04,840
to enhance. Most of the talk about enhancement is either cool far-out cyborg stuff or it's about work, but most
891
01:28:04,840 –> 01:28:09,160
of the things we actually care about happen in our own daily life. There are probably aspects of our
892
01:28:09,160 –> 01:28:14,680
being that we might want to enhance, and they're very different from what makes us work better. Okay, sorry,
893
01:28:14,680 –> 01:28:21,160
getting into a rant here, but at least that managed to cover quite a lot of ground. Yeah, I think there's
894
01:28:21,160 –> 01:28:26,360
a lot there; hopefully it has given that person a direction. I know we've gone late, so I'll
895
01:28:26,360 –> 01:28:31,160
can my last question and say thank you, Anders, for being on the show today, sharing your
896
01:28:31,160 –> 01:28:34,760
knowledge, sharing your excitement for the things you're working on, and everyone listening
897
01:28:34,760 –> 01:28:41,400
taking pictures and cooking tips and all sorts of things we can work on. Yeah, thanks for coming
898
01:28:41,400 –> 01:28:46,120
on the show. Well, thank you for having me, and good luck, and let's make the future bright.