
Neuroscience and bioethics


Darshan

Hey everyone, welcome to the DarshanTalks podcast. I'm your host, Darshan Kulkarni. It's my mission to help patients trust the products they depend on. As you know, I'm an attorney, I'm a pharmacist, and I advise companies with FDA-regulated products. So if you think about drugs, wonder about medical devices, consider cannabis, or obsess over pharmacy, this is the podcast for you. I do these video podcasts because they're a lot of fun, and I find myself learning something new each time; today is going to be one of those days. But it'd be nice to know if someone's actually listening, so if you like what you hear, please like, leave a comment, and subscribe. You can always find me and reach out to me on Twitter at @DarshanTalks, or just go to our website, DarshanTalks.com. Today's podcast is going to be really exciting, really interesting, and a really different perspective, because we're going to talk about terms that I've heard in passing but have no idea what they actually mean, terms like neurotech. We're also going to get into issues that have apparently existed since at least 1986, and probably before, and ask how they get recontextualized in today's times, if you will. So if you are in the life sciences and you're trying to figure out whether what you're doing is ethical, if you're in the biotech, medical device, or medtech field, you'll probably care about today's discussion. Our guest today is Dr. Allan McCay. He's the Deputy Director at the Sydney Institute of Criminology and an adjunct at the Sydney Law School, which by itself is kind of interesting to me. On top of that, my picture just keeps reminding me, over and over again, that I look like I'm sitting in some kind of hole; I can assure you I'm not, I'm in Philadelphia as we speak. But the first question I have to ask: is it okay if I call you Allan?

Allan

Sure, sure. Please do.

Darshan

Allan, could you first start with the very basic question: how did a Scot land up in Sydney?

Allan

How did what? Oh, how did a Scotsman end up in Sydney?

Oh, okay. So it actually relates to the legal profession, really. I initially qualified as a lawyer in Scotland, and I was, you know, somebody who was keen to travel. So I managed to find a couple of legal systems where I could get admitted as a lawyer. I worked for a while in Hong Kong, and at that time British lawyers could automatically get admitted without any exams. Another place that operated in a similar way was Australia, and so I was able to, you know, sort of move around the world. I guess it's partly a legacy of the British imperial system, with all its problems. So it was a love of travel combined with imperial history, Britain's in particular.

Darshan

And how did you go from being a lawyer who worked in Hong Kong and in Australia to now being a renowned bio... well, a renowned ethicist?

Allan

Yeah, so I was always interested in the philosophical problem of free will. I started teaching law and enrolled in a PhD, and my PhD was on behavioral genetics and sentencing: I considered the possibility of somebody referring to their genetic predisposition in a plea in mitigation when being sentenced. From that, I started to look at neuroscience and the law, and there are a lot of cases where people bring up issues of brain abnormality, maybe some form of dementia, for example, in a sentencing matter, much more often than behavioral genetics. Then I started to look at neurotechnologies, like brain-computer interfaces. You can't really look at these things without getting into the ethics of it, because the law runs out; there's not much law relating to these things. So it seemed quite natural to look at ethics, at neuroethics, with a view to thinking about how the law maybe should go, and how it shouldn't go, in response to some of these emerging sciences and technologies.

Darshan

That's really interesting to me, and the reason is that for the last 10 years I've actually taught bioethics at the University of the Sciences; it's a course on law and bioethics. I've always sort of come at the ethics part from a legal perspective. And let me add one more piece to that: I've always additionally thought of ethics as not what's required, but what you should do. But you give it a different perspective. You just talked about it almost like, once the law sort of gives up, for whatever reason, whether the technology is too new or the technology is ill considered, ethics takes off and sort of takes over from there.

Allan

Yeah, I think so. I've got a friend who appears in some matters in the High Court in Australia, you know, the top court, and once you get up there the law runs out a little bit; that's why there's a debate in the very superior courts. And maybe he's of the view that some sort of ethical argument will carry some sway there. But even apart from that, if you think about things like the regulation of brain-computer interfaces: well, they're new, at least in a consumer sense, and there's no widespread take-up of them, so there's not much in the way of cases and legislation. And so I think what we need to do is take an ethical perspective with a view to considering how the law might grapple with them, how it might regulate. We need to do some ethical thinking about how the law should respond to some of these emerging technologies. So yeah, that's my thought.

Darshan

So how do you even begin processing that, though? Because there are so many different ethical schools of thought you can jump into, like ends-justify-the-means or whatever direction you want to go down; there are a thousand different directions. How do you choose the one that's most relevant today, and most meaningful, in coming up with your analysis?

Allan

Um, I think the way I'm proceeding myself is, as I mentioned, that the thing I'm paying most attention to at the moment is neurotechnologies, brain-computer interfaces. There's an emerging body of work from ethicists; lots of ethicists have written papers on issues like mental privacy, autonomy, and that kind of thing, arising from neurotechnologies. So really what I'm doing is looking at those papers, because there's not so much written by lawyers, it mostly comes from ethicists, and considering which I find most persuasive and which seem most useful from the perspective of shaping the criminal law. I should say I'm particularly interested in the criminal law; I teach criminal law. I'm interested in regulation in general, but I'm a criminal law specialist.

Darshan

So let's explore that a little bit. Obviously there are a number of issues to delve into here, but let's start at the beginning. What are the most promising neurotech interfaces you've seen? And what has made you go, "Huh, I wonder..."? And what did you wonder?

Allan

I mean, there are amazing possibilities with neurotechnology. Something like deep brain stimulation for Parkinson's, for example: that's something that can potentially improve, and has improved, the lives of people with Parkinson's. Then there's something like a brain-computer interface for a person who has some sort of severe paralysis, or even locked-in syndrome, whereby a person who is entirely dependent on being cared for and fed by others might have some kind of access to communicating via email, or to controlling some sort of device. These kinds of therapeutic gains show great potential, and so there are some very important reasons for pursuing neurotechnology: significant possibilities for alleviating human suffering and incapacity. But then there are other pursuits that you might feel are a bit more morally questionable. Some people might be concerned about the military applications of brain-computer interfaces, for example using a brain-computer interface to control a swarm of drones on the battlefield; from an ethical perspective, there are more troubling thoughts about that. And then there's consumer neurotech, you know, work toward getting a direct brain-to-social-media connection, and that raises a whole host of ethical issues: about privacy, you know, brain reading, and about our autonomy, through devices that stimulate the brain. So you can think about neurotech going in two ways. One is reading from the brain in order to control devices or something like that. The other is intervening on the brain in order to, say, address unwanted symptoms like Parkinson's tremors. Each aspect raises its own issues. Reading raises issues of mental privacy: your brain data, wow, that's intimate data that must at least be protected. And if you think about interventions on the brain, that raises questions close to the free will problem and autonomy. If something is stimulating my brain and I behave differently because of this deep brain stimulation, or whatever form of stimulation it is, is that some sort of interference with my agency and the way I behave? So on one hand there's tremendous potential, and I certainly don't want to be anti-neurotech, because I'm not. But on the other hand there are some important ethical concerns that need to be thought about. My concern is that it would be good to try to shape the direction of neurotechnology, not to stop it, but to shape its direction, because there are a number of ways it could go.

Darshan

That's kind of interesting to me, because you mentioned a couple of different things that really jump to mind. Did you watch the movie Inception?

Allan

I have done, yes. Yeah.

Darshan

Is it just me, or does this almost sound like Inception? In Inception, it's the planting of a seed, and to me that's very deceptive, if you will: someone wouldn't know that that seed was planted. In these cases, you could actually cause a much more macro change, because now we have the technology.

Allan

Okay, so in Inception, one of the things that happened was that thoughts were kind of implanted in people, where there was a technology available to do that. Yeah. So, I've been doing a bit of work with a professor, Rafael Yuste, from Columbia University, who's a neuroscientist there, and his team have done some work with mouse models. It looks a bit like you can sort of insert thoughts into mice: make them feel as if they've seen something, and they start behaving as if they've seen some food. So yeah, I think these sort of sci-fi movies, some of them, give us something to think about. They're occasions to consider some of the ethical issues and to think about what kind of legal protections might be required, given some possibilities that might emerge.

Darshan

It's funny you say that. I'm sorry, I don't want to keep going back to the movies, but as I hear you talk I keep thinking of them; I'm a huge movie buff. What you just described, the mouse feeling like it's seen the food before, is what you said?

Allan

Oh, I think it's actually seeing it, you know. They start behaving as if they're seeing something when there's nothing there. I think that's it.

Darshan

Do you remember that scene from The Matrix where Neo gets jacked in? He goes, "I know kung fu." This seems exactly like that concept: there's a potential future where you don't have to get out of your chair, and you've eaten all the food you want, and you've learned jiu-jitsu, all while sitting in that chair the whole time.

Allan

Yeah, I think these kinds of films are useful insofar as, you know, it's not a technology that's here tomorrow, but it might be coming: people could enhance their skills, or enhance their knowledge, without doing all the work of enhancement. And that raises an issue as well, doesn't it? Because you might think, okay, let's say you've got access to this enhancement technology that allows you to acquire new knowledge, or perhaps even one day some new skills, more easily and quickly than me. Then there's the possibility people worry about, the divide between the haves and the have-nots: if the haves can enhance themselves, they can just increase that divide, in a way that's maybe unbridgeable and unfair. So yeah, I think those kinds of movies are worth watching, just to engage in the kind of thought experiments that philosophers do. But then there are also the near-term things. There are already technologies that are available, or pretty close to being available, that also provide, you know, sites for thinking about the issues. I think both approaches are worthwhile, and some of the more futuristic thinking can be done in conjunction with the near-term work. So I'm sort of in favor...

Darshan

I see. We're both lawyers, so unfortunately, the moment you said you're in favor of something, I have to go the other way. I hear you talking about being in favor of it, but one of the basic things you talked about is someone potentially putting electrodes in my head and controlling what...

Allan

Oh, sorry. When I say I'm in favor, I mean I'm in favor of engaging in analysis of films like The Matrix with a view to deciding what to do. That's what I'm in favor of; I think they're helpful in trying to think about how we should respond to this technology, in terms of maybe a regulatory framework or otherwise.

Darshan

So that raises an important question, right? Right now, probably the most stringent privacy law I know of is the GDPR. You've got other jurisdictions trying to do something, and you've got some US laws, like the CCPA or CPRA, all talking about a GDPR-like structure but in most senses weaker than the GDPR. But the GDPR doesn't even begin to conceptualize brain thoughts, though I guess one might argue that they're uniquely identifying, so maybe it is connected. What is your take? How far away are we from being ready for a new concept like this to emerge, and from being able to regulate it?

Allan

Yeah. So I think the point you raised about the adequacy of existing legal frameworks is a good one. I'm certainly not a privacy expert, but I'm involved in a group called the Neurorights Network, which is a group of scholars who are concerned about the human rights aspects of neurotechnologies. It's not that we don't also see the benefits of them, but we're concerned about some of the human rights aspects, and one of those aspects is mental privacy. Even at the most basic level, think about the International Covenant on Civil and Political Rights: there's a right to a private life. But is that privacy protection, at this most basic level, adequate to deal with mental privacy? When these kinds of regimes were created, the mind was pretty much a black box, not accessible to others. So I think there's a general question, whether at the basic international level or in more specific domestic legislation, about whether the privacy regimes are adequate to deal with the threats to mental privacy that now exist. And the answer, I think, is maybe not. Things are different in a more fundamental way if companies start to have lots of brain data that enables them to work out more easily what I'm thinking. So I think privacy is perhaps the most pressing area, because reading is going to come before writing: reading from the brain is more advanced. Being able to identify images that people are thinking of, and that kind of thing, is easier to do than manipulating thoughts. So I think the privacy question is probably the most immediate challenge in deciding what to do about neurotechnology.

Darshan

I hear you say that, but on the other hand I also think about this emerging area of behavioral science, which a lot of tech companies use to basically optimize behaviors, or push certain behaviors. If that's true, I can see a scenario in which they, "they" being whatever this company theoretically turns out to be, start reading our brain signals and know when the optimal time is to push us in one direction. So maybe you don't have to force my hand to actually write out whatever you want me to; you can just nudge it every so often.

Allan

Yeah. I think the way that maybe social media companies and others use their knowledge of the behavioral sciences to engage in things like nudging is already a kind of threat to autonomy, because people don't seem quite so free if they're reacting to things they're not really aware of. And then, well, one of the concerns is this: let's say that's already there. There's already a threat to one's mental privacy through what can be picked up from the way one types, or the things one clicks on, and that sort of thing. But then let's assume that, on top of that knowledge about me, I start wearing a headset and sometimes interacting with computers in a more direct way, and maybe the same company owns that data too. So maybe it's Facebook: Facebook is working on brain-computer interface technology; they get that going, and that brain data is in addition to the data they already have. There's this cumulative effect, and it just seems to increase the threat. But I agree with you: there's already an issue about mental privacy, and there's already an issue about autonomy, as a result of the involvement of behavioral scientists in various tech companies.

Darshan

Now, speaking as a bioethicist, or in your case as an ethicist: what is your take on those types of nudging behaviors? Let's say it helps me; it pushes me to take my medication, or pushes me to exercise more. Is that okay?

Allan

Yeah, these are difficult questions. I want to reserve judgment on them, because they're not the ones I've thought about most; a lot of the ethical questions I've considered arise primarily within the context of the criminal justice system, rather than in those cases. But I can see the issue. On the one hand, the aim seems benign, you know, if you're trying to get people to take medication that they need. On the other hand, if they don't know about it, it seems a bit non-consensual, and maybe...

Darshan

Or maybe they do know about it. Maybe they signed up for it. But there's still a certain ethical angle to "I am nudging you, and you've accepted that." At what point have you become no more than a guinea pig? And I don't know the answer to that.

Allan

Yeah, I think it's difficult. I agree. I mean, the signing up is an issue, though. You know, somebody has maybe got a mental health condition or something like that, and they agree to something to alleviate their depression, or whatever the issue is. But there's a question about whether they've read the consent and know precisely what they're consenting to. If someone's fully informed and understands the consequences of being nudged, and how the nudging works, that kind of seems okay. But then maybe there's something disconcerting even about that, if a person is starting to lose their autonomy and is just being pulled around, not really making their own decisions. Yeah, these are difficult questions.

Darshan

Is it truly your mens rea at that point? That's the question you're really asking. Interestingly enough, going even further along, if you just look at the testing part of it: the company that's going to devise such a product is going to get as many brain scans as it can. Who owns that data? And it gets even more interesting: do they have to throw it away? There was a US FTC case where a company used a bunch of data and trained its system on it. The FTC said, "You shouldn't have had the data in the first place." The company said, "No problem, we'll throw the data away." And the FTC said, "No, no, you need to throw the system away."

Allan

Oh, okay. Yeah, that's interesting.

Darshan

That raises all kinds of questions. At what point have you gone past a certain line? I mean, you think about Nazi Germany, and the flotation devices, I believe it was, designed based on unethical experiments on Jewish prisoners. On one level, at a pure bioethical level, you go: you can't let their deaths and their suffering be in vain. On the other hand, it's incredibly unethical to use it. So which one wins? And in the same way with this data: which is worse, having the brain scans or not? But yeah, I'm sorry.

Allan

Yeah, no, I think you're right; those are very tricky questions. I mean, you can imagine why it might be a good idea to have a system that prevents somebody reaping the benefit of the fruits of their unethical behavior, as a kind of deterrent. If the company just gets to keep using this technology that was born of an unethical practice, they might just make a commercial judgment: "Okay, well, we'll settle the privacy thing; we've got this wonderful new technology, and we'll be happy to infringe people's rights again." So I can understand how a system might wish to prevent that kind of thinking by saying, "You can't reap the benefits from this unethical practice that you've engaged in."

Darshan

I'll give you a situation that exists right now, where I can see this happening very easily. Maybe Australia and the US and the EU all say, "You know what, we know better; we're going to stop you from doing this kind of stuff." But you go into countries that don't have those developed laws, or don't want to develop those laws because they want the investment coming in. Now you've got a potentially vulnerable group signing up for things because they believe in the physicians, they believe in the money that's going to come to them. And then the companies say, "We're going to take that data and apply it across the world," because are we really that different, at that point? So it raises all kinds of interesting questions that we probably need to talk about in the next podcast we do together.

Allan

Yes, yes. Yeah. Right. Yeah. Very difficult. Yeah.

Darshan

I promised I'd only hold you for about 15 to 20 minutes, maybe a little longer, and we're already past the 30-minute mark. So, like I told you, it'll just be a couple of questions to end the conversation. For the first question: is there a question you'd like to ask the audience based on what we discussed?

Allan

I think the interesting question for the audience is this: there's a kind of technology, neurotechnology, that's coming, and it's going to have some kind of impact, because of the amount of commercial interest, medical interest, military interest, and scientific interest. And the big question is, how can we start to shape this technology, and get enough interest in shaping it, so that we don't just wait until there are problems with it and then respond? Like with social media: suddenly there are all these problems with it, and now we're trying to respond. How can we get on the front foot with emerging technologies and try to shape them, and try to avoid them becoming problematic? I think that's the big question, and that's the one I'm interested in.

Darshan

Very interesting. I always try to offer a first answer for the guest when they ask that question, so let me give you my first pitch at it. My experience is that the FDA, for example, deals with a lot of emerging therapies on a near-daily basis: questions like the use of cannabis, the use of psychedelics, the use of stem cells. The way the FDA handles that, and I'm not 100% sure I'm a big fan of it, is to sort of ignore the problem and exercise what it calls enforcement discretion, which basically means: "You're totally within our jurisdiction, but we don't know which direction you're going to go. We don't want to make a thousand rules that might contradict each other, so let's see what you land up doing, and then we'll regulate you." I can kind of see the point, and that sounds wise from a regulatory standpoint. But I feel like the period between when the technology is invented and when the FDA decides to start regulating it is a free-for-all. Some people act extremely ethically, and other people absolutely do not. Any emerging tech will probably go through that phase, and as much as I don't like it, I don't know how you do it better, unless you get an AI to look at it.

Allan

Yeah, I understand what you mean. It could go in all different directions, and how can they cover all the different possibilities? But I think the concern is that the speed of takeoff of new technologies is increasing; the technological development cycle used to be a bit longer, but now they're getting quicker and quicker. So maybe there's some sort of middle ground where there's at least a little bit of thinking about some of those possibilities, maybe not to the extent of actually regulating, but at least some law reform papers and the like in place, where some of the thinking has started to be done, so that once an issue does emerge, the response time is reduced a bit. But I agree with you, I understand the problem. It's a very real problem: the technology will go in many different directions, and it's hard to preempt them all.

Darshan

Not only can you not preempt them all; you might end up contradicting yourself.

Allan

Yeah, no, I agree; that's a problem. My basic project at the moment is trying to be on the front foot and anticipate some problems now, to create hypothetical scenarios, which I'm increasingly doing in my work, and which is why I'm sympathetic to science fiction, actually. But a problem with that is that we don't really know which way it's going to go; my hypothetical scenarios might not be the right ones, and a regulatory system based on them may be misguided. But I think you're right: there's got to be some way of trying to balance it. Maybe we've presented these as two opposite approaches, and maybe there's some kind of intermediate position where something is happening, so it's not just, "Okay, we'll wait and see, and we won't do anything."

Darshan

I'll give you an example; this actually happened when the FDA regulated social media. The FDA waited a really long time and basically killed social media for life sciences companies in the US. Then one day they came out with a bunch of different guidances, over the course of about two years. And the problem was that the technology had evolved so much that the guidances are interesting, but they raise a thousand more questions than they answer.

Allan

yeah. Yeah. Yeah.

Darshan

I'll give you another example. In the case of off-label marketing in the life sciences, what the FDA did was put out a position paper essentially saying: "We've lost a bunch of court cases saying that we have to regulate off-label marketing in a specific way. Here are 26 approaches we think might be possible, but we don't really know; can you tell us what we should do?" It was a 67-page paper. In my experience, European lawyers write a lot, and US lawyers tend to be a little more succinct in their opinions. So when I see a 67-page paper from a US agency, that tells me they really had no idea what the hell they were going to do. So I love your problem; I just don't know how you solve it.

Allan

Yes, I mean, you're raising good questions that do make me think. But I just think that somehow there's got to be a kind of running start, you know; it can't be a totally stationary start that happens only after things have emerged. Otherwise the problem is that the horse has kind of bolted a bit.

Darshan

But I'll give you a bad outcome that comes from that. There's something called a guidance at the US FDA. Guidances are, by definition, non-binding: the FDA specifically puts out "this is our current thought process, our current opinion," and it's non-binding on both the agency and on you. And this sounds like what I think you're describing as the ideal. Here's the problem: people start treating the non-binding guidance as binding. Now you're stuck with a guidance that was intended to be an initial thought process. Essentially, the way the FDA works is that they create a draft guidance, then a finalized guidance, and they want opinions between those two. What's happened is that, because the FDA knows guidances get treated effectively like regulations and laws, they come out with drafts and don't necessarily ever update them from draft status. There are literally draft guidances from 1996 that have not been updated.

Allan

I see. Yeah, that's worrying.

Darshan

Exactly. Which is how we land up in a scenario where an agency comes out and says, "This is our current thought process on how we might regulate neurotech," and then never updates it. Now you're stuck with something that wasn't even thought out properly at the time; it was just an initial thought.

Allan

Yes, yeah, that seems suboptimal. It does sound like some modified approach is needed. It seems like the problem is that this thing is not dynamic. But then, to make it dynamic: presumably this happens in lots of different areas, so you'd have to expand the agency to keep all these things live and updated. So yeah, you're making some very good points that are worth thinking about.

Darshan

Hopefully that's one answer; maybe we'll get some more people to answer it as well. So, a few more questions for you; this seems to be a popular section. These are the rapid-fire questions, if you will. This one is the question of the month: what did you learn this month?

Allan

So I've been reading a book called The Whale and the Reactor: A Search for Limits in an Age of High Technology, by Langdon Winner. It was written in 1986, and I've only just recently come across it; it's quite a classic in the philosophy of technology, but I hadn't encountered it before. It's very interesting. I've just started reading it, but thus far, one of the author's concerns is that when we engage in thought about technology, to the extent we do think about it, sometimes people just go to a kind of cost-benefit analysis. That kind of thinking, according to the author, misses out something important: how is it going to change us? How is it going to change society? Of course it's always important to consider the harms and benefits of new technology, but maybe we need a bit more thinking about how it will change our way of living. And maybe that has been taken on a bit since Langdon Winner originally wrote the book in 1986; you can see it in discussions of AI a bit more, this question of how it will change society. Anyhow, I've been very much enjoying reading this book. It's just been reissued, there's a 2020 edition, and I would recommend it to people who are interested in that.

Darshan

And I know the week is just starting, so we'll answer for last week: what was the most memorable thing that happened last week?

Allan

Ah, the most memorable thing? Oh, wow. Um, sadly, last week wasn't very memorable, because I spent quite a long time marking. Things go into a little bit of a blur when one's brain gets hijacked like that. But yeah...

Darshan

Hopefully next time we'll have something more memorable for when that question comes up again.

Allan

I hope so. Maybe this week will be a good one.

Darshan

Well, this was very, very cool. Do you mind if I do a quick summary of what we talked about? "Sure, sure." So, during this conversation... actually, I'm going to start with even before we went live. We talked about Langdon Winner's book; that's what you started with. You talked about how AI and neurotech are going to change the world. You talked a little bit about your background, starting with Hong Kong and going on to Australia. We talked about your PhD in behavioral genetics and sentencing, and got into the idea of free will. Then we started talking about neuroscience and neurotech, and you mentioned the brain-computer interface, which I have to imagine is something you deal with very often, because it came up in your conversation a few times. I did ask you a little bit about the brain interface, because Elon Musk has been talking about wanting to go there soon, but I feel like that's a conversation for next time. We talked about the concept of mental privacy and how that ties into the movie Inception. We talked a little bit about autonomy, and how behavioral science, in combination with the lack of mental privacy laws, could land up affecting autonomy, your specific interest obviously being in the context of criminal law. We then landed up talking about how it could be extremely valuable, obviously, in the case of deep brain stimulation for Parkinson's, or a brain-computer interface for something like locked-in syndrome. On the other hand, there were concerns: military applications to control drones on the battlefield, which I think is kind of interesting, because there was recently, I want to say within the last week, a report of the first autonomous drone killing someone. People were very upset about it. But is it any better if there's a person behind that drone? I don't know if a brain interface necessarily makes that better or worse, because it still kills. The part that made your example interesting is that it was one-to-many: one person managing multiple drones. Does that become an ethical quandary when there's so much more impact? That's at least what I took away from your example. Then we talked about the lack of privacy in the case of brain-to-social-media. It's bad enough that we take pictures of ourselves on Instagram and every meal we eat has to be recorded; what happens if the brain just starts giving out snapshots? There was a TV show, Black Mirror, that did exactly that: it recorded your day. Then we talked about the concept of the haves and have-nots, the neuro-enhanced, if you will, and whether that creates an unfair advantage that no one else will be able to match. In many ways it creates a version of The Matrix: some people never got connected to the interface while others did, and the ones who did got to enjoy many more capabilities than the ones who never did. So what does that mean? I did have a question for you: you talked about an International Covenant on something around privacy, and I didn't know which one that was. I'd never heard of that one before.

Allan

Oh, I see. After the Universal Declaration of Human Rights (well, quite a long time after, actually), one of the treaties that emerged from it was the International Covenant on Civil and Political Rights. In that treaty, there's a right to a private life. I think my point was that there are debates about whether those kinds of protections under international law properly embrace concepts of mental privacy, and about whether the international system, as well as domestic systems, needs to be upgraded with new rights, including mental privacy, as Chile is in fact doing at the moment.

Darshan

Chile, is that right? I did not know that.

Allan

Yeah, Chile is altering their constitution to include rights to mental integrity and mental privacy, right now.

Darshan

Why go to the constitution for that? Why can't you just do that with laws?

Allan

They're doing it with both, actually. But as I understand it, the reason for the constitutional change is that the Chilean constitution was created during the time of Augusto Pinochet, so it's tainted with human rights abuses, and there's a big culture of interest in human rights, and scholarship, actually, because of this history. Right now there's a wholesale rewriting of the Chilean constitution, which has provided an opportunity, one that wouldn't otherwise be there, to include some new rights. But there's also a piece of ordinary law, called the neuroprotection bill, which is also on its way through. So Chile is kind of a world leader in this; I think Spain might be sort of following. There's a movement to address neurorights, and as I mentioned, I'm involved in this organization called the Neurorights Network, and we've been looking at that. Rafael Yuste, for example, who I mentioned, the professor from Columbia, has appeared in front of the Chilean senate, I think, and we were all participants in a conference organized by the Chilean senate. They're kind of world leaders in respect of neurorights.

Darshan

Very, very cool. I think that was my summary; did I miss anything, by the way?

Allan

I didn't notice anything you missed. No, no.

Darshan

This was amazing. How can people reach you if they have questions?

Allan

You can just Google my name; the first hit is my website.

Darshan

And that's spelled...?

Allan

Allan with a double L, and McCay is M-C-C-A-Y, so it's often misspelled. Yeah.

Darshan

Perfect. And if you liked this podcast, please leave a comment and please subscribe. You can find me at @DarshanTalks on Twitter, or go to our website, DarshanTalks.com. Dr. McCay, this was wonderful. Thank you so much.

Allan

Thanks very much for inviting me. I enjoyed it, and I got something from it as well from your interesting questions. Thank you.

Darshan

I appreciate it.

Allan

This is the DarshanTalks podcast: regulatory guy, irregular podcast, with host Darshan Kulkarni. You can find the show on Twitter at @DarshanTalks or at the show's website, DarshanTalks.com.
