MindPortal aims to revolutionize communication by using AI models to convert brain signals into text, potentially paving the way for telepathy and seamless interaction with ChatGPT. Imagine thought-to-text communication! However, as our AI Editor Ryan Morrison discovered, this AI mind-reading technology still has a long journey ahead. Find out why the experiment didn't quite hit the mark and what challenges lie ahead for this ambitious startup.

Category: 🤖 Tech
Transcript
00:00Have you ever thought tech is listening to your thoughts? Well, that's exactly what I'm here at
00:04startup MindPortal to have done. They've created a new technology that can read your mind and send
00:10it to ChatGPT so you can have a conversation without having to type or speak. Wish me luck.
00:19Hi. Hi Ryan, good to see you. Hi, good to see you. So tell me a little bit about MindPortal.
00:24What is it? How did the idea come about? MindPortal is a human-AI interaction company
00:30and it was founded after a year of me taking psychedelic substances. During those abstract
00:38thinking sessions, a lot of visions came out. So during those sessions, what I set as kind of
00:43the thinking goal of those sessions is the future of humanity, how to make an impact, etc. But really
00:50the premise of MindPortal is we want to explore the nature of human-AI interaction.
00:54Okay. How can you make a future that's symbiotic, which biologically is win-win?
00:59We're going to look at a demo today where we can chat to ChatGPT using our brain.
01:03Yes. So this is the world's first demonstration, like never before has this been done. Can you bridge
01:09the gap between the human and their thoughts, which is the most intimate form of communication,
01:14what you think, what you feel, and an AI agent that can understand and respond to those thoughts.
01:19All right, should we try it? Yes, let's do it. Tell me what we're looking at now. So you've got
01:25something called an fNIRS system, which is able to record brain data, optical brain data, based on
01:31the blood flow happening in Ed's brain. When Ed imagines language, that obviously activates different
01:37parts of the brain, and it's that activation that's being picked up in real time. It's also the first
01:43demonstration of communicating with ChatGPT as well. Using just your mind. Exactly. You'll think a
01:50thought, the sentence is classified, and that classified sentence then becomes an input into
01:55ChatGPT to then respond to you. Okay. We're going to have a look at it. What's the goal? What do you
02:00want to do with this? Where do you see it going? This system, if you were to spend time building it
02:06towards commercialization, you could scale it in a multitude of different ways. Okay. So number one is you
02:11could scale the number of sentences and then of course you scale the accuracy. What are we doing
02:16first? So if you'd like to pick a sentence. All right, well, let's go for Venus. I'm a space buff.
02:22Yep. I'm going to imagine this. Well, I'll read it out loud then, because if you read it out loud,
02:26then it raises the question of, is it just taking it from your voice? So you're going to think in your
02:31mind, if I were on Venus, I would be in a world of extremes. The pressure would feel like being a kilometer
02:37underwater, crushing you from all sides. The air is a corrosive nightmare capable of
02:41dissolving metal and forget about rain. It's sulfuric. So you're thinking that.
02:46Yep. That's a long sentence to imagine. It is. So you can't just imagine the visual
02:51of being on Venus. You've got to imagine the actual words in that sentence. Currently,
02:55yeah, we're trying to extract the semantics from that. All right, well, let's go. Let's see how that
03:00happens. You're going to think those words and hopefully ChatGPT will respond.
03:06So you've sent that off as the prompt to your decoder.
03:13So now the decoder basically is taking the brain data and trying to identify which of the sentences
03:18he was thinking. Okay. And then that over there is outputting the sentence. So you got it wrong.
03:26So this is the restaurant sentence. Okay. So it was sentence number two, I think.
03:30And this is the brain data that, as he was thinking that sentence, went into the system.
03:35Yeah, we can do another one and we can show you basically how this progresses as he's imagining.
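(To make the pipeline being described concrete, here is a minimal sketch in Python of the decode-then-prompt loop, assuming a small fixed set of candidate sentences: a classifier scores the recorded fNIRS features against each pretrained sentence, picks the best match, and that text is what would be forwarded to ChatGPT as the prompt. All names, numbers, and sentence texts are illustrative assumptions, not MindPortal's actual code, and no real API is called.)

```python
# Minimal sketch of the decode-then-prompt loop described above.
# Everything here is hypothetical: the real decoder, feature format,
# and sentence set are not public, and no actual ChatGPT call is made.
import numpy as np

# Small fixed set of pretrained candidate sentences (placeholder texts).
SENTENCES = [
    "If I were on Venus, I would be in a world of extremes...",
    "Recommend a good restaurant near me.",
    "I just had a chat with my mum on the phone.",
]

def classify_fnirs(features: np.ndarray, weights: np.ndarray) -> int:
    """Score each candidate sentence against an fNIRS feature vector and
    return the index of the best match (stand-in for a trained classifier)."""
    scores = weights @ features            # one score per candidate sentence
    return int(np.argmax(scores))

def decode_and_prompt(features: np.ndarray, weights: np.ndarray) -> str:
    """Decode the imagined sentence; in the demo this string would become
    the prompt sent to ChatGPT."""
    return SENTENCES[classify_fnirs(features, weights)]

# Toy usage with random data, just to show the shapes involved.
rng = np.random.default_rng(0)
features = rng.normal(size=64)                     # fake fNIRS features
weights = rng.normal(size=(len(SENTENCES), 64))    # fake per-sentence weights
print("Prompt for the chat model:", decode_and_prompt(features, weights))
```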
03:40What's the one that works more often than not? If we're sticking to probability.
03:46Let's try it. Let's try again. Let's see. Let's see if it works.
03:54And right. Okay. So this time he's had a chat with mum on the phone,
03:58but it is showing that you can have this conversation. It's just a case of scale.
04:03Correct. So with enough data, with a larger model, we hypothesize, as we've seen in AI with any
04:11breakthrough in its infancy, that the accuracy would improve. And then you'd start to increasingly have
04:17the conversation you want without the incorrect inputs. In essence, what's happening each time,
04:21there's an incorrect input and then there's correct ones. And correct ones are happening enough times
04:25for us to know this works. Okay. Scaling the data and scaling the model is about getting it to work more and
04:31more times with reduced error, in essence. Should we try it one more time?
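(A side note on "correct ones are happening enough times for us to know this works": with three candidate sentences, chance is one in three, so the claim amounts to beating that baseline. A quick sketch of how one might check that, with made-up trial counts:)

```python
# Rough check of whether a decoder beats the one-in-three chance level.
# The trial counts below are made up purely for illustration.
from math import comb

def p_at_least(k: int, n: int, p: float = 1 / 3) -> float:
    """Probability of k or more correct out of n trials by pure chance."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# e.g. 7 correct out of 10 attempts with three candidate sentences:
print(f"{p_at_least(7, 10):.3f}")   # ~0.020, unlikely to be luck alone
```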
04:37At least we've seen them all now. Now, it should be stressed, this is very early stage. We're
04:42looking at a sort of research preview of a technology that with enough scale will improve
04:48potentially exponentially. Where do we go? Am I going to be able to go into a supermarket
04:55and look at a product and say in my head to my AI, can I eat this and have the AI pick up my thoughts
05:04and respond? Can we get to that point? Yeah, I think we can. And the reason being is we've seen
05:10this again and again in AI, as you increase the amount of data, as you increase the model size, you get
05:17better performance. So the time constraint and the financial constraint there, honestly,
05:23are around how many people you can collect brain data from. Because unlike going online or using
05:29written sources, which are easy sources of data to acquire, this is a bit more tricky in the
05:34current paradigm. I've done some back-of-the-napkin calculations, just for fun. And it's not as
05:39expensive as you might think. All right. And it doesn't take as long as you might think. I think for,
05:44you know, under $50 million, which is, you know, in the venture capital world or in the world,
05:49this is not a huge sum. Yeah, exactly. You could, in six months' time, have 100 to 200 different
05:55headgear caps operating. Yeah. People coming in in batches, with thousands of people going through. Now,
06:01of course, my cursory kind of calculations assumed a threshold you'd need to reach, because we don't
06:07know how much data will be enough. So let's assume you reach that threshold, you get the funding.
06:13And in a year's time, you've done all the data, you've crunched all the data, your model's working.
06:17I can go out and buy a baseball cap and talk to my AI without having to speak out loud. Can I talk
06:22to someone else wearing the same baseball cap? And are we going to have full telepathy
06:26with the AI as a translator? So the answer to that is, if you scale the data and if you scale the model,
06:31and if you integrate it into a cap wearable, then yes, theoretically, it should work. It should work.
06:36There's no reason why that shouldn't work. So we can have telepathy.
06:39You could have telepathy. There's nothing stopping it. And that's what we were setting out to prove. So for
06:43example, I could wear headgear, think of a sentence such as, how are you today? Yeah.
06:50That could then be sent through an AI model that takes the text and translates it into a voice.
06:55Yeah. And puts it into your ear through an AirPod. Okay. Through an AirPod. And you can hear me.
07:00And you can just respond. They can respond with their thoughts. And you can respond back to my AirPod.
07:02So in theory, we're having a telepathic conversation. Neither of us are speaking.
07:06Yeah.
07:06But we're using pre-trained sentences to have a back and forth dialogue, which we're both hearing.
07:11And now you've got AI models that can take a bit of your own voice.
07:15And it can sound like you, and put that into the audio.
07:17So it would sound like Ryan when I'm hearing it. It would sound like me when I'm talking to you.
07:20That raises an interesting point because that would potentially give the voiceless a voice.
07:24Because you could use a text-to-speech engine based on that.
07:30And their thoughts could go directly to the voice engine rather than having to type it out.
07:34Exactly.
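(The telepathy scenario sketched above is essentially a relay: decode the thought to text, render it in the sender's voice with a voice-cloning text-to-speech step, and play the audio into the listener's earbud. Below is a hypothetical sketch of that relay; the function names and the TTS step are assumptions for illustration, not a real product API.)

```python
# Hypothetical relay for the "telepathy with AI as translator" scenario.
# decode_sentence and synthesize_speech are placeholders, not a real
# MindPortal or text-to-speech API.
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    text: str
    audio: bytes

def decode_sentence(brain_data: bytes) -> str:
    """Placeholder: map recorded brain data to one of the pretrained sentences."""
    return "How are you today?"

def synthesize_speech(text: str, voice_of: str) -> bytes:
    """Placeholder: render the text in the sender's cloned voice."""
    return f"<audio of {text!r} in {voice_of}'s voice>".encode()

def relay(brain_data: bytes, sender: str) -> Utterance:
    """One leg of the conversation: decode a thought, voice it, pass it on.
    The listener's earbud would play `audio`; their reply would come back
    through the same relay in the other direction."""
    text = decode_sentence(brain_data)
    return Utterance(speaker=sender, text=text,
                     audio=synthesize_speech(text, voice_of=sender))

print(relay(b"...", sender="Ryan").text)
```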
07:34Well, that was a lot of fun. It didn't work as expected, but it's an early research preview.
07:39This isn't a product they're going to be putting on the market tomorrow.
07:42However, it did give us a really interesting insight into what we might be using
07:46and how we might be interacting with AI and each other in the next few years.
07:51And I really hope it works because I do not want to be standing in the supermarket
07:57talking to myself when I'm just having a conversation with my AI.
08:01Fingers crossed.
08:02If you want to find out more about what's going on in the world of AI,
08:05find me on Tom's Guide or follow our socials at Tom's Guide.
