
ND ≈ ML: The Thinking Behind the Thinking

  • Writer: Paul Murphy
  • Apr 28
  • 3 min read

Exploring the unexpected overlap between neurodivergent minds and machine learning models 

I'm not an expert in machine learning. But I am neurodivergent, and as it turns out, that makes learning about machine learning unexpectedly intimate. It started innocently enough: late-night web-surfing, a passing fancy, a random click on the title of a tutorial video. A couple of hours and several tabs later, I was plunged into algorithms, learning models, and flowcharts that look like electrical wiring diagrams but supposedly chart what a neural network does. It was interesting, disorganised, and, to my surprise, familiar. Of course, human thought is infinitely more complicated than any machine simulation. However, these imperfect analogues provide a surprisingly evocative way of considering how individuals experience the world differently. What struck me was how complicated these models were and how much they reminded me of people.

 

Neurodivergent minds, particularly. How they learn. How they process. How they transform, or how they flat-out refuse to. The more I read, the more it began to feel like these human-built systems quietly held up a mirror to the humans we so frequently misread.

 

Instance-based learners, for example, use models like k-Nearest Neighbours (k-NN). These systems do not generalise. They memorise. They compare new input to the closest examples in their memory, calculating similarity with meticulous attention. No shortcutting. No filtering. Pure memory. It reminded me of the sort of neurodivergent brain that remembers exactly where it last saw you, what you were wearing, and the joke you laughed too hard at in 2018. Were k-NN a human, it would say, "We have met before," and it would be right, down to the minute, the place, and the fact that you had spinach on your teeth.
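For the curious, here is a toy sketch of my own (invented data, plain Python, not anything from a real library) of what "pure memory" looks like in code: the model keeps every example verbatim and just asks its neighbours.

```python
from collections import Counter
import math

def knn_predict(memory, query, k=3):
    """Classify `query` by majority vote among the k most similar
    remembered examples. No generalisation: the model IS its memory."""
    # Rank every stored example by distance to the query.
    ranked = sorted(memory, key=lambda ex: math.dist(ex[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Every example is kept verbatim as (features, label).
memory = [((1.0, 1.0), "friend"), ((1.2, 0.9), "friend"),
          ((8.0, 8.0), "stranger"), ((7.5, 8.5), "stranger")]
print(knn_predict(memory, (1.1, 1.0)))  # -> friend
```

Nothing is learned in advance; all the work happens at the moment of recognition, which is rather the point.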


Then, naturally, there's Naive Bayes, a probability model. It forecasts based on previous events, working out the odds like a little bookie with a scrapbook. It doesn't merely conclude; it weighs every possibility and every risk, and then acts reluctantly. It's the mind that hovers over a message, reads it five times, asks itself whether "Sounds good!" is too enthusiastic, and chooses, prudently or otherwise, to leave it on read. If this model dated, it would go to speed dating with a spreadsheet: "Based on your earlier answers, there is a 78 per cent chance I will annihilate this."
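The little bookie can be sketched in a few lines (again, a toy of my own with made-up "message history" data, not a production implementation): count how often each clue co-occurred with each outcome, then multiply the odds together.

```python
from collections import Counter, defaultdict

def train_nb(examples):
    """Count how often each feature value co-occurs with each label."""
    labels = Counter(label for _, label in examples)
    counts = defaultdict(Counter)  # (feature_index, label) -> value counts
    for features, label in examples:
        for i, value in enumerate(features):
            counts[(i, label)][value] += 1
    return labels, counts

def predict_nb(model, features):
    """Score each label by prior x likelihoods, with add-one smoothing
    so an unseen clue never zeroes out a whole hypothesis."""
    labels, counts = model
    total = sum(labels.values())
    best, best_score = None, 0.0
    for label, n in labels.items():
        score = n / total
        for i, value in enumerate(features):
            score *= (counts[(i, label)][value] + 1) / (n + 2)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical scrapbook: (message length, tone) -> what happened.
history = [(("short", "exclaiming"), "reply"),
           (("short", "flat"), "ignore"),
           (("long", "exclaiming"), "reply"),
           (("long", "flat"), "ignore")]
model = train_nb(history)
print(predict_nb(model, ("short", "flat")))  # -> ignore
```

Everything is just counting and multiplying, which is why the model feels like someone tallying past evidence before daring a decision.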

 

Decision trees, on the other hand, are no-nonsense structures. They compartmentalise decisions into clear, rule-based pipelines. If this, then that. Logical, orderly, utterly intolerant of ambiguity. This kind of thinking thrives on rituals, timetables, and knowing precisely when the train leaves, not roughly. These are the minds that pack luggage for every possible weather event in July because "you never know." That is not inflexibility. It is constructing a framework to endure uncertainty.
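In code, a decision tree really is just nested if/then questions. A hand-written toy version (my own invented rules, not a trained model) makes the shape obvious: every path ends in exactly one unambiguous answer.

```python
def pack_for_july(forecast_says_rain, ever_been_surprised_by_weather):
    """A hand-written decision tree: each question splits the world
    in two, and every branch ends in one unambiguous decision."""
    if forecast_says_rain:
        return "pack the raincoat"
    if ever_been_surprised_by_weather:
        return "pack the raincoat anyway"  # 'you never know'
    return "pack light"

print(pack_for_july(False, True))  # -> pack the raincoat anyway
```

Real tree learners (like CART) discover which questions to ask from data, but the finished model is exactly this: a ladder of crisp rules with no room for "roughly".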

 

And then there are neural networks. These are rich, layered systems with interwoven logic and higher-order learning patterns. On the surface, they seem impenetrable, almost mystical. You can't always be sure what they're doing or why, but given enough time and the right input, they make sense of things in ways other models can't. They are not necessarily interpretable or fast, but they are deep, and they expose abstract patterns that simpler models cannot. I pictured the kind of person who needs space to compute and time to digest, and isn't much for small talk. But when they actually join the dots, the result is something nobody else could possibly have foreseen.
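The layering is the whole trick. Here is a tiny sketch with hand-picked weights (not a trained network, just an illustration): a single layer cannot represent XOR, but stack two and the pattern falls out.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: weighted sums of everything below, squashed nonlinearly."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def xor_net(a, b):
    # Hand-picked weights: the hidden layer computes OR and NAND,
    # and the output layer combines them into XOR - a relationship
    # no single-layer (linear) model can represent.
    hidden = layer([a, b], [[10, 10], [-10, -10]], [-5, 15])
    out = layer(hidden, [[10, 10]], [-15])
    return round(out[0])

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 1, 1, 0]
```

Training finds such weights automatically via gradient descent; the point here is only that depth lets the network "join dots" that shallower models cannot.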

 

Association rule learners give you something entirely different. These models are masters at finding unusual patterns between data points. They're the ones who notice that people who buy pasta also buy dog shampoo. It may not be obvious, but it does happen. It's the kind of neurodivergent brain that brings up seemingly unrelated topics, connects cereal, socks, and your ex's iTunes playlist, and somehow knits them together into an insight. These minds can't turn off pattern recognition, and thank goodness, because they're often the first to spot what everybody else misses.
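A minimal rule miner is surprisingly small. This sketch (my own toy baskets, a simplified cousin of the Apriori algorithm) counts co-occurrences and keeps pairs that show up together often enough.

```python
from itertools import combinations
from collections import Counter

def find_rules(baskets, min_support=2, min_confidence=0.6):
    """Mine 'people who buy X also buy Y' rules from shopping baskets."""
    item_counts = Counter()
    pair_counts = Counter()
    for basket in baskets:
        items = set(basket)
        item_counts.update(items)
        pair_counts.update(combinations(sorted(items), 2))
    rules = []
    for (a, b), n in pair_counts.items():
        if n < min_support:
            continue  # too rare to trust
        for x, y in ((a, b), (b, a)):
            confidence = n / item_counts[x]  # P(buys y | buys x)
            if confidence >= min_confidence:
                rules.append((x, y, confidence))
    return rules

baskets = [["pasta", "dog shampoo"], ["pasta", "dog shampoo", "milk"],
           ["pasta", "bread"], ["milk", "bread"]]
for x, y, c in find_rules(baskets):
    print(f"buys {x} -> also buys {y} ({c:.0%})")
```

On this made-up data it surfaces exactly the pasta/dog-shampoo link: an unglamorous loop over pairs, doing professionally what some brains do involuntarily.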

 

And then there's dimensionality reduction. Some algorithms, like Principal Component Analysis (PCA), take massive, unmanageable data sets and distil them to what's relevant, pruning away the background noise. They don't discard information mindlessly; they perform a linear transformation, collapsing complexity into the most meaningful directions, reducing overwhelm without losing substance. They cut things down in order to cope. For some neurodivergent people, that is not an option but a necessity. Life offers too much input and too many layers of noise. The only way through is to decide what matters and let the rest fall away. PCA looks at a hundred variables and quietly says, "Let's just keep these three." That is not laziness; that is triage.
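The "most meaningful direction" isn't a metaphor; it's the direction of greatest variance. This sketch (toy data of my own, power iteration instead of a proper eigen-solver) finds the first principal component by hand.

```python
import math

def first_component(data, iters=100):
    """Find the single direction of greatest variance (the first
    principal component) by power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[i] for row in data) / n for i in range(d)]
    centred = [[row[i] - means[i] for i in range(d)] for row in data]
    # Covariance matrix of the centred data.
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    # Repeatedly multiplying by the covariance matrix pulls any vector
    # toward the dominant eigenvector - the direction that matters most.
    v = [1.0] * d
    for _ in range(iters):
        v = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return v

# Two noisy measurements of one underlying signal: the points vary
# almost entirely along the x = y diagonal.
data = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.8), (5, 5.1)]
print(first_component(data))  # roughly [0.707, 0.707]: keep the diagonal
```

Two columns collapse to one coordinate along that diagonal, and almost nothing of substance is lost. Triage, in linear algebra.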


Last of all, of course, comes deep learning. These are vast, complex, and largely inscrutable models. They do not follow a formula; they invent their own. They come to understand things humans don't, and sometimes can't explain even afterwards. They warm up slowly and are easily misjudged, but they have a sneaky brilliance. They do not yell. They simply exist. And when they are right, they revolutionise everything.


So why does all this matter? In trying to create intelligent systems, we have, almost by chance, created systems that reflect something very human: diversity of thought and processing, and the fact that one system does not fit all. We accept that no single machine-learning model is suitable for every problem. We design, support, and customise them for different use cases. We know their limitations and their strengths.


Why do we not extend the same courtesy to people? Neurodivergence is not a bug. It is an alternative way of processing, storing, and engaging with the world. Some of us process everything. Some of us need rules. Some of us cut through the noise, and some build new frameworks from it. These are not things to fix. They are systems to understand.


Learning about machine learning didn't just teach me how algorithms are designed. It gave me a different way to look at my own brain. Not as flawed. Not as broken. Just different. Maybe it is not the fastest. It's not the neatest. But it has depth, memory, and the occasional spark of strange brilliance.

 

And maybe, if we can luxuriate in the complexity of our machines, we might learn to do the same with our minds. Of course, neurodivergence is not the same for everyone; what resonates for one brain might feel alien to another. But I felt a strange familiarity in these analogues.

 
 
 
