Combining the best attributes of computers with the best attributes of the human brain can form a “dream team” capable of amazing feats that neither one could handle alone.
Your brain is far better at pattern recognition than any computer, but nearly any computer is better than your brain at certain tasks that require speed and brute-force analysis. What if they could work together as a team?
That’s exactly what DARPA wondered in 2008, when the Pentagon’s research arm launched the Cognitive Technology Threat Warning System (CT2WS).
Basically, a soldier wears an electroencephalogram (EEG) cap that monitors his brain signals as he watches the feed from a 120-megapixel, tripod-mounted, electro-optical video camera with a 120-degree field of view. (Translation: incredible detail in a huge range of vision.)
HRL Laboratories’ principal investigator, Deepak Khosla, explains that before the soldier sees the video, an algorithm that mimics how the human brain processes imagery identifies potential hot spots and marks them on the footage.
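HRL’s actual neuromorphic algorithms haven’t been published, but a generic stand-in for that “mark hot spots on the footage” step might look like the sketch below, which uses OpenCV’s spectral-residual saliency detector. The thresholds, file names, and region-size cutoff are illustrative assumptions, not CT2WS parameters.

```python
# Illustrative hot-spot detection over a single video frame.
# This is NOT HRL's neuromorphic algorithm -- just a generic
# saliency-based stand-in (requires opencv-contrib-python).
import cv2
import numpy as np

def find_hot_spots(frame, threshold=0.6, min_area=50):
    """Return bounding boxes of visually salient regions in a frame."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(frame)
    if not ok:
        return []
    # Normalize to [0, 1] and keep only the most salient pixels.
    saliency_map = cv2.normalize(saliency_map, None, 0, 1, cv2.NORM_MINMAX)
    mask = (saliency_map > threshold).astype(np.uint8)
    # Group salient pixels into candidate regions ("hot spots").
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes

# Example: mark hot spots on one frame from a (hypothetical) camera feed.
frame = cv2.imread("wide_field_frame.jpg")
if frame is not None:
    for (x, y, w, h) in find_hot_spots(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("marked_frame.jpg", frame)
```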
In a battle situation, there could be many potential threats. That’s where the EEG cap comes in. The system monitors the soldier’s brain as he watches the video, looking for a specific “P300” response, which I’ve previously described as sort of an “Aha!” moment of recognition.
Then, says Khosla, “Out of 100 hot spots in perhaps 10 seconds, those high-scoring regions can be presented back to the soldier in two or three seconds. Then he can decide what to do.”
In other words, the system helps the soldier scan huge amounts of hard-to-interpret video footage faster and with much higher accuracy.
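Neither HRL nor Advanced Brain Monitoring has released its decoding code, but the triage loop Khosla describes, scoring the EEG response time-locked to each hot-spot presentation and handing back only the high scorers, can be sketched with simulated data in a few lines of NumPy. The sampling rate, channel count, 300–500 ms scoring window, and windowed-mean “detector” below are assumptions for illustration, not the CT2WS implementation.

```python
# Toy sketch of the P300 triage step: rank hot-spot presentations by the
# strength of the EEG response in a post-stimulus window. Sampling rate,
# window, and scoring rule are illustrative assumptions.
import numpy as np

FS = 256                   # EEG sampling rate in Hz (assumed)
P300_WINDOW = (0.3, 0.5)   # seconds after onset where a P300 typically peaks

def p300_scores(eeg, onsets, fs=FS, window=P300_WINDOW):
    """eeg: (n_channels, n_samples); onsets: stimulus onset sample indices.
    Returns one score per presented hot spot (mean amplitude in the window,
    averaged over channels and time) -- a crude stand-in for a trained classifier."""
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return np.array([eeg[:, o + start : o + stop].mean() for o in onsets])

# Simulated example: 100 hot-spot chips shown over ~10 s, 8 EEG channels.
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, size=(8, FS * 12))
onsets = np.arange(100) * int(0.1 * FS)            # one chip every ~100 ms
targets = rng.choice(100, size=5, replace=False)   # 5 chips secretly contain targets
for t in targets:                                  # inject a fake P300-like bump
    s = onsets[t] + int(0.3 * FS)
    eeg[:, s : s + int(0.2 * FS)] += 3.0

scores = p300_scores(eeg, onsets)
top_k = np.argsort(scores)[::-1][:10]              # regions to show the soldier again
print("injected targets:", sorted(targets.tolist()))
print("top-scoring chips:", sorted(top_k.tolist()))
```

In a real rapid-serial presentation the responses to neighboring images overlap in time, which is one reason the fielded system needs a trained classifier rather than a simple windowed average.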
Here’s how DARPA’s press release describes what happens:
Even though a person may not be consciously aware of movement or of unexpected appearance, the brain detects it and triggers the P-300 brainwave, a brain signal that is thought to be involved in stimulus evaluation or categorization.
By improving the sensors that capture imagery and filtering results, a human user who is wearing an EEG cap can then rapidly view the filtered image set and let the brain’s natural threat-detection ability work. Users are shown approximately ten images per second, on average. Despite that quick sequence, brain signals indicate to the computer which images were significant.
Chris Berka, CEO and co-founder of Advanced Brain Monitoring, which also worked on the project, explains that since people don’t always consciously recognize their own P300 response, the system automatically picks up on responses the person might not even notice.
Says Berka of ABM’s efforts in general, “A lot of the work we do is trying to optimize the ways that humans interact with machines. There are certain skills and abilities that the human brain is uniquely suited for, one of which is pattern recognition.”
The combined system seems to work better than either a human or a computerized system working alone. DARPA reports:
The cognitive algorithms can highlight many events, such as a bird flying by or a branch swaying, that would otherwise be considered irrelevant but may actually indicate threats or targets. In testing of the full CT2WS kit, absent radar, the sensor and cognitive algorithms alone returned 810 false alarms per hour. When a human wearing the EEG cap was introduced, false alarms dropped to only five per hour, while 91 percent of the 2,304 target events per hour were successfully recognized.
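A quick back-of-the-envelope check of those figures, using only the numbers DARPA reports, shows how dramatic the improvement is:

```python
# Back-of-the-envelope check of the figures DARPA reports.
false_alarms_algo_only = 810   # per hour, sensor + cognitive algorithms alone
false_alarms_with_eeg = 5      # per hour, with the EEG-capped operator in the loop
reduction = 1 - false_alarms_with_eeg / false_alarms_algo_only
print(f"false-alarm reduction: {reduction:.1%}")   # ~99.4%

target_events_per_hour = 2304
hit_rate = 0.91
print(f"targets correctly flagged: ~{target_events_per_hour * hit_rate:.0f} per hour")  # ~2097
```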
This brain-computer Dream Team approach has potential for numerous applications in our work and personal lives. A person sorting through thousands of images or documents or faces could simply pay attention, and when their brain registers recognition, the computer could take over. There would be no need for the person to write down a file name, move the item to a new folder, or take any other action.
Khosla says, “Cognitive algorithms and EEG decoding can be used to detect interesting or anomalous objects or patterns or events in video or other data. In medical applications, they could help diagnose anomalies in radiology images. In video gaming, you could use EEG for simple control. In automotive applications, you could detect threats and control buttons inside a car or apply the brakes.”
Berka adds that one of the challenges is to find the ideal balance between human and computer participation.
“More automation is not always the best thing,” she cautions. “We’ve done some studies with cars that focus on activities such as parallel parking or maintaining how far you are from the vehicle in front of you. It turns out that if you add too much automation, the driver’s attention drifts and they go into a pre-sleep brain pattern. That’s not good, because if you have an emergency and you have to react, you are pretty far away from being alert.”
One giant obstacle to bringing EEG caps, or Brain-Computer Interfaces, out of the laboratory has been the need to use gel in the user’s hair, which for most people is pretty gross. Berka’s firm is now working with Quantum Applied Science & Research and the University of California San Diego on a new dry gel substance that she reports “forms the same conductive bond, but that doesn’t leave any residue in the user’s hair.”
After being successfully field-tested, CT2WS recently moved to Phase Three, where goals include making the system easier to use and even more accurate.
Bruce Kasanoff is a speaker, author and innovation strategist who tracks sensor-driven innovation at Sense of the Future. Kasanoff and co-author Michael Hinshaw teamed up to explore more of the opportunities unearthed by disruptive forces in Smart Customers, Stupid Companies.
Source: digitaltrends.com