HighIQCommunity.Com


Our Weird Dreams May Help Us Make Sense of Reality, AI-Inspired Theory Suggests

by Mike McRae | May 15, 2021 | News

"If you are not willing to risk the usual, you will have to settle for the ordinary."

— Jim Rohn

(Image: Thomas Barwick/Stone/Getty Images) – Mike McRae, 15 May 2021

There you are, sitting front row of Miss Ryan's English class in your underwear, when in walks Chris Hemsworth holding a saxophone in one hand and a turtle in the other, asking you to play in his band.

"Why not?" you say, taking the turtle before snapping awake in a cold sweat, the darkness pressing in as you whisper to yourself, "…WTF?"

Decades – if not centuries – of psychological analysis have ventured to explain why our imaginations go on strange, unconstrained journeys while we sleep, with the general consensus being that it has to do with processing experiences from our waking hours.

That's all well and good, but seriously, do they have to be so … well, bizarre?

Neuroscientist Erik Hoel from Tufts University has taken inspiration from the way we teach neural networks to recognize patterns, arguing the very experience of dreaming is its own purpose, and its weirdness might be a feature, not a bug.

"There's obviously an incredible number of theories of why we dream," says Hoel.

"But I wanted to bring to attention a theory of dreams that takes dreaming itself very seriously – that says the experience of dreams is why you're dreaming."

Just as we might teach a child how to read, training a program to identify patterns in a human-like manner requires repeatedly running through scenarios that have certain things – like arrangements of letters – in common.

Computer engineers have found this repetition can make a program exceptionally good at recognizing patterns within the context of its training sets, at the risk of it struggling to apply the same process when the situation gets real outside the classroom.

This problem is referred to as overfitting, and it basically amounts to an inability to generalize under situations that contain unpredictable elements. Situations like those in the real world.
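To make the idea concrete, here is a minimal, hypothetical sketch (not from Hoel's paper) of overfitting taken to its extreme: a "model" that simply memorizes its training examples, so it scores perfectly on data it has seen but cannot cope with anything new.

```python
# Illustrative example only: the most overfitted learner possible is one
# that memorizes its training set outright. It never makes a training
# error, and it never generalizes.

def train_by_memorizing(examples):
    """Return a classifier that just looks up answers it has seen before."""
    memory = dict(examples)
    # Unseen inputs get a shrug: the memorizer has no rule to fall back on.
    return lambda x: memory.get(x, "unknown")

# Tiny training set: words labeled by whether they start with a vowel.
train = [("apple", "vowel"), ("banana", "consonant"), ("orange", "vowel")]
classify = train_by_memorizing(train)

print(classify("apple"))  # "vowel"   - perfect on training data
print(classify("igloo"))  # "unknown" - fails on anything it hasn't seen
```

A learner that had actually generalized the rule ("does the word start with a, e, i, o or u?") would handle "igloo" with no trouble; the memorizer cannot.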

Fortunately, computer scientists have some fixes. One is to throw in more scenarios, just like giving a student more and more books to read. Sooner or later, the diversity in lessons will come to reflect the complexity of everyday life.

Another method introduces twists as a feature of the pattern being learned. By augmenting the data in some way (such as by reversing a symbol), a program is forced to deal with the fact that patterns aren't all going to look identical.
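In code, that augmentation step might look something like this sketch (illustrative names, not a real library's API): each tiny "symbol" in the training set gets a mirrored twin, so the learner can't assume every example arrives in exactly the same orientation.

```python
# Sketch of data augmentation by mirroring (illustrative example).
# Each pattern is a list of strings forming a small grid of pixels.

def augment_with_flips(patterns):
    """Return the original patterns plus horizontally mirrored copies."""
    augmented = []
    for rows in patterns:
        augmented.append(rows)
        augmented.append([row[::-1] for row in rows])  # mirror each row
    return augmented

# A tiny 3x3 "symbol" drawn as strings.
symbol = [
    "X..",
    "XX.",
    "X..",
]
dataset = augment_with_flips([symbol])
print(len(dataset))   # 2: the original and its mirror image
print(dataset[1][0])  # "..X" - the top row, flipped
```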

These fixes help improve the chances a program will cope with a wider variety of situations, but it's impossible to come up with a lesson for every single possible event life might throw its way.

Perhaps the cleverest fix is referred to as dropout. Forcing the AI to ignore – or drop out – random features of a lesson gives it the tools to cope better with scenarios that include a few potentially confusing elements.
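A bare-bones version of dropout, assuming plain lists of numbers rather than a real deep-learning framework, might look like this: during training, each feature is silenced with probability p, and the survivors are scaled up so the expected total signal is unchanged.

```python
import random

# Hedged sketch of dropout (real frameworks implement this inside their
# layers): zero each feature with probability p, and divide the survivors
# by the keep probability so the expected value of each feature is preserved.

def dropout(features, p, rng):
    """Randomly silence a fraction p of features, rescaling the rest."""
    keep = 1.0 - p
    return [x / keep if rng.random() < keep else 0.0 for x in features]

rng = random.Random(0)  # fixed seed so the sketch is repeatable
features = [1.0, 2.0, 3.0, 4.0]
noisy = dropout(features, p=0.5, rng=rng)
# Roughly half the features survive (doubled here, since keep = 0.5);
# the rest are dropped to zero.
print(noisy)
```

Because the network never knows which features will vanish on a given pass, it can't lean too heavily on any one of them, which is exactly the kind of enforced flexibility Hoel suggests dreams might provide.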

Noticing a similarity between these overfitting fixes and dream scenarios like Chris Hemsworth offering you a turtle, Hoel has extended the fundamentals of dropout to our own brains to develop the "overfitted brain hypothesis".

"If you look at the techniques that people use in regularization of deep learning, it's often the case that those techniques bear some striking similarities to dreams," says Hoel.

Keeping in mind it's a hypothesis in need of rigorous testing, the fact that we happen to dream of tasks we perform repeatedly during the day could be better explained if our brains engaged in their own kind of dropout to prevent overfitting.

Hoel also cites the fact that loss of sleep – and with it, those strange dream states – still allows us to process knowledge, while making it harder to generalize what we've learned.

Although the very nature of dreaming makes any hypothesis on its purpose hard to test, experiments challenging the overfitted brain hypothesis would focus on variations in generalization rather than memorization.

If found to have merit, the hypothesis could guide the way to improving solutions to overfitting in AI, tweaking the timing and nature of dropout or augmenting variables in ways that help the learning process generalize more efficiently.

"Life is boring sometimes," says Hoel.

"Dreams are there to keep you from becoming too fitted to the model of the world."

So take that turtle, tell Miss Ryan that you're over J.D. Salinger, and go on the road with Chris's band. Your brain will thank you for it when you wake up.

This research was published in Patterns.

