
Bounded Rationality Meets Deep Learning: Reconsidering Herbert Simon’s Legacy
Jay E. Ryu: jrdatahub@gmail.com

When scholars and practitioners discuss decision-making in large organizational settings, Herbert Simon's concept of bounded rationality is frequently cited. Decision-makers, constrained by limited information-processing capacity and institutional frictions, make boundedly rational choices. Under uncertainty, rather than evaluating all available alternatives, they rely on simple heuristics and overlook complex interrelationships within the decision environment. This behavior arises from constraints on available information and from cognitive limitations. Simon framed individuals as adaptive systems: information-processing agents whose decisions are shaped by their goals, values, and memories. As such, individuals can process only limited information at a time, drawing on specific memories when confronted with stimuli from their environment.
 
However, with the advent of artificial intelligence (AI), machine learning, and deep learning, the cognitive capacity available to decision-makers has expanded beyond what was once thought possible. At first glance, these advances seem to contradict Simon's insights. Yet a closer look reveals that Simon's information-processing theory is thriving in the age of deep learning, particularly in architectures such as Recurrent Neural Networks (RNNs).
 
Revisiting Simon's Decision-Making Process
 
Simon identified three stages in the decision-making process: stimuli, memory, and problem-solving.
 
1. Stimuli: In this stage, decision-makers exercise selective attention due to cognitive constraints, focusing on environmental cues shaped by past experiences. These cues determine what information is perceived and which memories are triggered. Recent theories, such as Disproportionate Information Processing and Punctuated Equilibrium Theory, support Simon's idea that decision-makers process environmental cues selectively, rather than passively.
 
2. Memory: Memory, the second key element, forms the basis for determining values, goals, alternatives, and expectations. It is divided into short-term and long-term memory. Short-term memory evaluates the relevance of incoming information, but its limited capacity forces decision-makers to store relevant data in long-term memory. Long-term memory, which forms rules for categorizing stimuli, is slow to develop, constraining decision-making.
 
3. Problem-Solving: Faced with these limitations, decision-makers define a problem space, a subjective and simplified representation of the environment. They extract information from it to guide a selective search for solutions, evaluating alternatives sequentially. Once they find an option that minimally meets their criteria, they adopt it as the solution; Simon called this rule satisficing, and it is sketched in the code after this list.
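
To make that search rule concrete, here is a minimal Python sketch of sequential, satisficing evaluation. The candidate list, the scoring function, and the aspiration level of 0.8 are illustrative assumptions, not anything Simon specified.

```python
import random

def satisficing_search(alternatives, evaluate, aspiration_level):
    """Examine alternatives one at a time and stop at the first option
    that meets the aspiration level, rather than searching for the optimum."""
    for alternative in alternatives:
        score = evaluate(alternative)
        if score >= aspiration_level:
            return alternative, score  # "good enough": stop searching
    return None, None  # no alternative met the aspiration level

# Illustrative problem space: 20 candidate options with random payoffs.
random.seed(0)
candidates = [random.uniform(0, 1) for _ in range(20)]
choice, score = satisficing_search(candidates, evaluate=lambda x: x, aspiration_level=0.8)
print(choice, score)
```

Nothing about this search is optimized; the point is simply that evaluation stops at the first acceptable option rather than continuing until the best one is found.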
 
Simon’s Insights in Deep Learning
 
Of these stages, the first, stimuli, can be reconciled with deep learning by extending machine capacity: as that capacity grows, selective attention can be reinterpreted as increasingly open attention over the input. What is particularly striking is how Simon's account of memory is mirrored in deep learning models, especially in RNNs equipped with Long Short-Term Memory (LSTM) modules. Even though modern architectures such as Transformers rely on self-attention and parallel processing of data, Simon's conception of interacting memories continues to thrive.
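
As a rough illustration of that parallel, the following PyTorch sketch feeds a sequence of stimuli through an LSTM. The dimensions are illustrative assumptions and the analogy is loose: the hidden state acts like a limited-capacity working memory of the current step, while the cell state carries information across many steps, echoing Simon's distinction between short-term and long-term memory.

```python
import torch
import torch.nn as nn

# A single-layer LSTM: the hidden state h_t resembles a limited-capacity
# short-term memory, while the cell state c_t retains information across
# many time steps, loosely echoing Simon's long-term store.
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

x = torch.randn(1, 30, 8)        # one sequence of 30 stimuli, 8 features each
output, (h_n, c_n) = lstm(x)

print(output.shape)  # torch.Size([1, 30, 16]) -- hidden state at every step
print(h_n.shape)     # torch.Size([1, 1, 16])  -- final hidden ("short-term") state
print(c_n.shape)     # torch.Size([1, 1, 16])  -- final cell ("long-term") state
```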
 
Lastly, Simon’s idea of a problem space offers valuable lessons for deep learning in decision-making. Neural networks excel at simplifying complex decision environments by extracting patterns from vast amounts of input data—much like how decision-makers simplify their problem space.
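
As a loose sketch of that idea, the encoder below, with purely illustrative layer sizes, compresses a 100-feature observation of the environment into a four-dimensional representation, a machine analogue of constructing a simplified problem space.

```python
import torch
import torch.nn as nn

# An encoder that maps a high-dimensional "decision environment" (100 features)
# onto a small latent representation (4 features), i.e., a simplified problem
# space extracted from a complex environment.
encoder = nn.Sequential(
    nn.Linear(100, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

environment = torch.randn(1, 100)     # one observation of the full environment
problem_space = encoder(environment)  # compressed, simplified representation
print(problem_space.shape)            # torch.Size([1, 4])
```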
 
Walking Beyond Bounded Rationality
 
If we maximize the potential of deep learning, we may well surpass bounded rationality and move toward intended rationality, where humans, aided by machines, are empowered to achieve goals that were previously out of reach.
 
 
Bibliography
 

Bendor, J. (2015). Incrementalism: Dead yet flourishing. Public Administration Review, 75(2). https://doi.org/10.1111/puar.12333

Fry, B. R., & Raadschelders, J. C. N. (2017). Mastering Public Administration: From Max Weber to Dwight Waldo. https://doi.org/10.4135/9781506374529

Jones, B. D., & Baumgartner, F. R. (2005). The Politics of Attention: How Government Prioritizes Problems. The University of Chicago Press.

Ryu, J. E. (2017). Bounded Bureaucracy and the Budgetary Process in the United States. https://doi.org/10.4324/9781315082042

Stevens, E., Antiga, L., & Viehmann, T. (2020). Deep Learning with PyTorch. Manning Publications Co.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
