I’ve copied in an article from wired.com about a recent development in neuroscience below. It seems we now have the technology at hand to reconstruct imagery from within someone’s consciousness. I’m not worried about people using the tech to implant something into the brain – I think that’s way off… impossible even – but the thought of someone ‘seeing’ what you’re thinking is quite scary and a bit Outer Limits-ish.
Obviously if you’ve committed a murder, it could be proven or disproven by delving into your mind. This is a bit like the tech shown in Minority Report. You might be thinking “Minority Report was about pre-cognition”, but remember that they had the precogs in a pool, wired up to the computer, and the police were able to ‘see’ what the precogs saw in their minds. Tom Cruise assembled the snapshots on the big glass screen to determine how the murder had happened and who had done it – or, more accurately, how it was going to happen and who was going to do it.
When Spielberg made Minority Report, he asked some big thinkers to give him an idea of what the near-future might realistically look like in terms of computers, holographic devices and transportation.
Obviously given this development, they weren’t far off at all.
I’m not sure I’d like someone poking around reconstructing my dreams though.
The article from wired.com:
Neuroscientists at the University of California, Berkeley, have figured out a way of decoding visual activity taking place in the brain and reconstructing it using YouTube clips.
The team used functional Magnetic Resonance Imaging (fMRI) and computational models to decode and reconstruct visual experiences in the minds of test subjects. So far, it’s only been used to reconstruct movie trailers, but it could, it is hoped, eventually yield equipment to reconstruct dreams on a computer screen.
The participants, who were members of the research team (as they had to stay still inside the scanner for hours at a time), watched two sets of movie trailers while the fMRI machine measured blood flow in their visual cortex.
Those measurements were used to come up with a computer model of how the visual cortex in each subject reacted to different types of image. “We built a model…that describes how shape and motion information in the movie is mapped into brain activity,” said Shinji Nishimoto, lead author of the study.
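The mapping Nishimoto describes is what researchers call an encoding model: a function fitted so that, given the features of a movie, it predicts the brain activity the movie will evoke. Here is a minimal sketch of that idea, assuming a simple linear ridge regression from stimulus features to voxel responses (the function names and the choice of plain ridge regression are my illustration, not the paper’s actual pipeline):

```python
import numpy as np

def fit_encoding_model(features, voxel_activity, ridge=1.0):
    """Fit a linear map from stimulus features to voxel responses.

    features:       (n_timepoints, n_features) descriptors of the movie
                    (e.g. shape and motion information per frame)
    voxel_activity: (n_timepoints, n_voxels)   measured fMRI responses
    Returns a weight matrix of shape (n_features, n_voxels).
    """
    X, Y = features, voxel_activity
    n_feat = X.shape[1]
    # Ridge regression solution: W = (X^T X + lambda*I)^-1 X^T Y
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_feat), X.T @ Y)

def predict_activity(weights, features):
    """Predict brain activity for a new movie from its features."""
    return features @ weights
```

Once fitted on the first set of trailers, such a model can be run “forwards” on any new clip to predict the activity it should produce, which is exactly what the second stage of the experiment tests.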
After associating the brain activity with what was happening on the screen in the first set of trailers, the model was then tested on the second set of clips: it was asked to predict the brain activity that would be generated by the visual patterns on-screen. To give it some ammunition for that task, it was fed 18 million seconds of random YouTube videos.
Then, the 100 YouTube clips whose predicted brain activity was most similar to the activity actually evoked by the clip being watched were merged together, forming a blurry but reasonably accurate representation of what was going on on-screen. “We need to know how the brain works in naturalistic conditions,” said Nishimoto. “For that, we need to first understand how the brain works while we are watching movies.”
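The final reconstruction step described above is essentially a rank-and-average: score every library clip by how well its predicted brain activity matches the observed activity, take the top 100, and blend their frames. A toy sketch of that step, assuming correlation as the similarity score (the function name, data shapes, and scoring choice are my illustration):

```python
import numpy as np

def reconstruct(observed, library_activity, library_frames, k=100):
    """Average the frames of the k library clips whose (predicted)
    brain activity best correlates with the observed activity.

    observed:         (n_voxels,)           measured response to the clip
    library_activity: (n_clips, n_voxels)   predicted response per library clip
    library_frames:   (n_clips, H, W)       one representative frame per clip
    """
    # z-score so the dot product below is a Pearson correlation
    obs = (observed - observed.mean()) / observed.std()
    lib = (library_activity - library_activity.mean(axis=1, keepdims=True)) \
          / library_activity.std(axis=1, keepdims=True)
    scores = lib @ obs / len(obs)          # correlation per library clip
    top = np.argsort(scores)[-k:]          # indices of the k best matches
    return library_frames[top].mean(axis=0)  # blurry blended reconstruction
```

Averaging many roughly-matching clips is why the published reconstructions look blurry: the blend keeps the shapes and motion the clips agree on and washes out everything else.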
The technology could be used to try to find out what’s going on in the minds of people who can’t (or, more sinisterly, won’t) communicate verbally. However, Nishimoto admits that we’re still “decades” from scanning other people’s thoughts and intentions. Oh, and Inception fans will be disappointed too: the authors say, “There is no known technology that could remotely send signals to the brain in a way that would be organized enough to elicit a meaningful visual image or thought.”