Decode logo history

8/11/2023

On a screen filled with TV static, a bunch of symbols forms the word "DECODE". The "DECODE" text's light swirls and the "Entertainment" shoots out light rays. "DECODE" is in a royal blue American Typewriter font, and the "Entertainment" font is similar to Comic Sans. On a white background, we see the words "DECODE Entertainment Inc." in blue.

For the study, three people spent a total of 16 hours inside an fMRI machine listening to spoken narrative stories, mostly podcasts such as the New York Times' "Modern Love." This allowed the researchers to map out how words, phrases and meanings prompted responses in the regions of the brain known to process language. They fed this data into a neural network language model that uses GPT-1, the predecessor of the AI technology later deployed in the hugely popular ChatGPT. The model was trained to predict how each person's brain would respond to perceived speech, then narrow down the options until it found the closest response.

To test the model's accuracy, each participant then listened to a new story in the fMRI machine. The study's first author, Jerry Tang, said the decoder could "recover the gist of what the user was hearing." For example, when the participant heard the phrase "I don't have my driver's license yet," the model came back with "she has not even started to learn to drive yet." The decoder struggled with personal pronouns such as "I" or "she," the researchers admitted. But even when the participants thought up their own stories – or viewed silent movies – the decoder was still able to grasp the "gist," they said. This showed that "we are decoding something that is deeper than language, then converting it into language," Huth said. Because fMRI scanning is too slow to capture individual words, it collects a "mishmash, an agglomeration of information over a few seconds," Huth said. "So we can see how the idea evolves, even though the exact words get lost."

Ethical warning

David Rodriguez-Arias Vailhen, a bioethics professor at Spain's Granada University not involved in the research, said it went beyond what had been achieved by previous brain-computer interfaces. This brings us closer to a future in which machines are "able to read minds and transcribe thought," he said, warning this could possibly take place against people's will, such as when they are sleeping.

The researchers anticipated such concerns. They ran tests showing that the decoder did not work on a person if it had not already been trained on their own particular brain activity. The three participants were also able to easily foil the decoder: while listening to one of the podcasts, the users were told to count by sevens, name and imagine animals, or tell a different story in their mind. All these tactics "sabotaged" the decoder, the researchers said.

Next, the team hopes to speed up the process so that they can decode the brain scans in real time. They also called for regulations to protect mental privacy. "Our mind has so far been the guardian of our privacy," said bioethicist Rodriguez-Arias Vailhen. "This discovery could be a first step toward compromising that freedom in the future."
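The decoding strategy described above (an encoding model predicts how a person's brain would respond to candidate speech, and the decoder keeps the candidate whose predicted response best matches the observed scan) can be sketched with toy data. Everything below is synthetic and hypothetical: the hashed bag-of-words features stand in for the study's GPT-1 language features, and the random linear map stands in for a fitted per-subject encoding model.

```python
import numpy as np

# Toy sketch of "narrow down the options until it found the closest
# response". All data here is synthetic; the real inputs would be fMRI
# voxel responses and language-model features.

rng = np.random.default_rng(0)
N_FEATURES, N_VOXELS = 8, 32

# Hypothetical per-subject "encoding model": a fixed linear map from
# sentence features to a predicted brain response (one value per voxel).
W = rng.normal(size=(N_FEATURES, N_VOXELS))

def features(sentence: str) -> np.ndarray:
    """Deterministic toy bag-of-words features (a stand-in for
    language-model embeddings)."""
    vec = np.zeros(N_FEATURES)
    for word in sentence.lower().split():
        vec[sum(map(ord, word)) % N_FEATURES] += 1.0
    return vec / np.linalg.norm(vec)

def predict_response(sentence: str) -> np.ndarray:
    """Encoding model: predicted brain response to a candidate sentence."""
    return features(sentence) @ W

def decode(observed: np.ndarray, candidates: list[str]) -> str:
    """Keep the candidate whose predicted response correlates best
    with the observed response."""
    return max(candidates,
               key=lambda c: np.corrcoef(predict_response(c), observed)[0, 1])

# Simulate a scan: the listener actually heard `heard`; the observed
# response is the encoding model's prediction plus measurement noise.
heard = "i do not have my drivers license yet"
observed = predict_response(heard) + rng.normal(scale=0.05, size=N_VOXELS)

candidates = [
    "she has not even started to learn to drive yet",
    heard,
    "the weather was cold and grey that morning",
]
best = decode(observed, candidates)
```

In the published method the candidate continuations are proposed incrementally by the language model rather than drawn from a fixed list; the fixed list here only illustrates the score-and-select step.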