Laws, Theories, Principles and Patterns that developers will find useful.
- 📖 Check out my new book Effective Shell on Amazon
- 🌍 Try hacker-laws.com
- 🧠 Experiment with my new project Terminal AI
- ☕️ Like this project? Consider buying me a coffee with a one-off donation
- 🎧 Listen to the podcast The Changelog – Laws for Hackers to Live By
- 📖 Download the PDF eBook
There are lots of laws which people discuss when talking about development. This repository is a reference and overview of some of the most common ones. Please share and submit PRs!
❗: This repo contains an explanation of some laws, principles and patterns, but does not advocate for any of them. Whether they should be applied will always be a matter of debate, and greatly dependent on what you are working on.
Laws can be opinions on inevitabilities in the world of software engineering, or wry observations on unavoidable realities.
90–9–1 Principle (1% Rule)
The 90-9-1 principle suggests that within an internet community such as a wiki, 90% of participants only consume content, 9% edit or modify content and 1% of participants add content.
Real-world examples:
- A 2014 study of four digital health social networks found the top 1% created 73% of posts, the next 9% accounted for an average of ~25% of posts, and the remaining 90% accounted for an average of 2% (Reference)
See Also:
90–90 Rule
The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
A wry reinterpretation of the Pareto Principle (or 80-20 rule) that highlights the real-world challenges of completing engineering work. This sentiment is also echoed in Hofstadter’s Law.
See also:
Amdahl’s Law
Amdahl’s Law is a formula which shows the potential speedup of a computational task which can be achieved by increasing the resources of a system. Normally used in parallel computing, it can predict the actual benefit of increasing the number of processors, which is limited by the parallelisability of the program.
The law is best illustrated with an example. If a program is made up of two parts, part A, which must be executed by a single processor, and part B, which can be parallelised, then adding more processors to the system executing the program can only have a limited benefit. It can potentially greatly improve the speed of part B – but the speed of part A will remain unchanged.
The diagram below shows some examples of potential improvements in speed:
(Image Reference: By Daniels219 at English Wikipedia, Creative Commons Attribution-Share Alike 3.0 Unported, https://en.wikipedia.org/wiki/File:AmdahlsLaw.svg)
As can be seen, even a program which is 50% parallelisable will benefit very little beyond 10 processing units, whereas a program which is 95% parallelisable can still achieve significant speed improvements with over a thousand processing units.
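The formula itself is simple: if p is the proportion of a program that can be parallelised and n is the number of processors, the overall speedup is 1 / ((1 - p) + p / n). A minimal sketch in Python (the function name and sample values are illustrative, not part of the law):

```python
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    """Theoretical overall speedup given the parallelisable fraction
    of a program and the number of processors available."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

for p in (0.50, 0.95):
    for n in (10, 100, 1000):
        print(f"p={p:.2f}, n={n:>4}: speedup = {amdahl_speedup(p, n):.2f}x")

# p=0.50 plateaus just below 2x, while p=0.95 approaches its 20x ceiling,
# matching the curves in the diagram above.
```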
As Moore’s Law slows and gains in individual processor speed diminish, parallelisation is key to improving performance. Graphics programming is an excellent example: with modern shader-based computing, individual pixels or fragments can be rendered in parallel, which is why modern graphics cards often have many thousands of processing cores (shader units).
See also:
The Broken Windows Theory
The Broken Windows Theory on Wikipedia
The Broken Windows Theory suggests that visible signs of crime (or a lack of care for an environment) lead to further and more serious crimes (or further deterioration of the environment).
This theory has been applied to software development, suggesting that poor quality code (or Technical Debt) can lead to a perception that efforts to improve quality may be ignored or undervalued, thus leading to further poor quality code. This effect cascades, leading to a significant decrease in quality over time.
See also:
Examples:
Brooks’ Law
Adding human resources to a late software development project makes it later.
This law suggests that in many cases, attempting to accelerate the delivery of a project which is already late by adding more people will make the delivery even later. Brooks is clear that this is an over-simplification; however, the general reasoning is that, given the ramp-up time of new resources and the communication overheads, velocity decreases in the immediate short term. Also, many tasks may not be divisible, i.e. easily distributed between more resources, meaning the potential velocity increase is also lower.
The common phrase in delivery “Nine women can’t make a baby in one month” relates to Brooks’ Law, in particular, the fact that some kinds of work are not divisible or parallelisable.
This is a central theme of the book ‘The Mythical Man Month‘.
See also:
CAP Theorem (Brewer’s Theorem)
The CAP Theorem (defined by Eric Brewer) states that for a distributed data store only two out of the following three guarantees (at most) can be made:
- Consistency: when reading data, every request receives the most recent data or an error is returned
- Availability: when reading data, every request receives a non-error response, without the guarantee that it is the most recent data
- Partition Tolerance: when an arbitrary number of network requests between nodes fail, the system continues to operate as expected
The core of the reasoning is as follows. It is impossible to guarantee that a network partition will not occur (see The Fallacies of Distributed Computing). Therefore in the case of a partition we can either cancel the operation (increasing consistency and decreasing availability) or proceed (increasing availability but decreasing consistency).
The name comes from the first letters of the guarantees (Consistency, Availability, Partition Tolerance). Note that this does not relate to ACID, which has a different definition of consistency. More recently, the PACELC theorem has been developed, which adds constraints for latency and consistency when the network is not partitioned (i.e. when the system is operating as expected).
Most modern database platforms acknowledge this theorem implicitly by offering the user of the database the option to choose between whether they want a highly available operation (which might include a ‘dirty read’) or a highly consistent operation (for example a ‘quorum acknowledged write’).
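As a toy illustration of the trade-off (this is not any real database’s API; all names below are invented), consider a store with a primary and a replica that can lose their connection:

```python
class ToyReplicatedStore:
    """A deliberately simplified two-node store to illustrate CAP."""

    def __init__(self):
        self.primary = {"balance": 100}  # receives all writes
        self.replica = {"balance": 100}  # serves reads, may lag behind
        self.partitioned = False         # is the replication link down?

    def write(self, key, value):
        self.primary[key] = value
        if not self.partitioned:
            self.replica[key] = value    # replication only succeeds when connected

    def read(self, key, mode="consistent"):
        if not self.partitioned:
            return self.replica[key]
        if mode == "consistent":
            # CP behaviour: refuse to answer rather than risk stale data.
            raise RuntimeError("partitioned: cannot guarantee latest value")
        # AP behaviour: answer, accepting the data may be stale.
        return self.replica[key]

store = ToyReplicatedStore()
store.partitioned = True
store.write("balance", 50)                      # the replica never sees this write
print(store.read("balance", mode="available"))  # 100: available, but stale
store.read("balance", mode="consistent")        # raises: consistent, but unavailable
```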
Real world examples:
- Inside Google Cloud Spanner and the CAP Theorem – Goes into the details of how Cloud Spanner works, which appears at first to seem like a platform which has all of the guarantees of CAP, but under the hood is essentially a CP system.
See also:
Clarke’s three laws
Clarke’s three laws on Wikipedia
Arthur C. Clarke, a British science fiction writer, formulated three adages that are known as Clarke’s three laws. The third law is the best known and most widely cited.
These so-called laws are:
- When a distinguished but elderly scientist states that something is possible, they are almost certainly right. When they state that something is impossible, they are very probably wrong.
- The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
- Any sufficiently advanced technology is indistinguishable from magic.
Conway’s Law
This law suggests that the technical boundaries of a system will reflect the structure of the organisation. It is commonly referred to when looking at organisational improvements: Conway’s Law suggests that if an organisation is structured into many small, disconnected units, the software it produces will be too. If an organisation is built more around ‘verticals’ which are oriented around features or services, the software systems will also reflect this.
See also:
Cunningham’s Law
The best way to get the right answer on the Internet is not to ask a question, it’s to post the wrong answer.
According to Steven McGeady, Ward Cunningham advised him in the early 1980s: “The best way to get the right answer on the Internet is not to ask a question, it’s to post the wrong answer.” McGeady dubbed this Cunningham’s Law, though Cunningham denies ownership, calling it a “misquote”. Although it originally referred to interactions on Usenet, the law has been used to describe how other online communities work (e.g., Wikipedia, Reddit, Twitter, Facebook).
See also:
Dunbar’s Number
“Dunbar’s number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships—relationships in which an individual knows who each person is and how each person relates to every other person.” There is some disagreement about the exact number. “… [Dunbar] proposed that humans can comfortably maintain only 150 stable relationships.” He put the number into a more social context: “the number of people you would not feel embarrassed about joining uninvited for a drink if you happened to bump into them in a bar.” Estimates for the number generally lie between 100 and 250.
Like stable relationships between individuals, a developer’s relationship with a codebase takes effort to maintain. When faced with large complicated projects, or ownership of many projects, we lean on convention, policy, and modeled procedure to scale. Dunbar’s number is not only important to keep in mind as an office grows, but also when setting the scope for team efforts or deciding when a system should invest in tooling to assist in modeling and automating logistical overhead. Putting the number into an engineering context, it is the number of projects (or normalized complexity of a single project) for which you would feel confident in joining an on-call rotation to support.
See also:
The Dunning-Kruger Effect
The Dunning-Kruger Effect on Wikipedia
If you’re incompetent, you can’t know you’re incompetent… The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is.
The Dunning–Kruger effect is a theoretical cognitive bias, described by David Dunning and Justin Kruger in a 1999 psychological study and paper. The study suggests that people with a low level of ability at a task are likely to overestimate their ability at that task. The proposed reason for this bias is that a sufficient awareness of the complexity of a problem or domain is required for a person to be able to form an informed opinion of their capability to work in that domain.
The Dunning-Kruger effect has sometimes been used to describe a related, but not necessarily implied effect which could be described as “The less a person understands a domain, the more they are likely to believe they can easily solve problems in that domain, as they are more likely to see the domain as simple”. This more general effect is highly relevant in technology. It would suggest that people who are less familiar with a domain, such as non-technical team members or less experienced team members, are more likely to underestimate the effort required to solve a problem in this space.
As a person’s understanding and experience in a domain grows, they may well encounter another effect: they tend to overestimate the ability of others, or underestimate their own ability, because they have become so experienced in the domain. In all cases these effects are cognitive biases. As with any bias, an understanding that it may be present is often sufficient to help avoid the challenges: when there is awareness of a bias, more inputs and opinions can be included to attempt to eliminate it. A closely related bias is Illusory superiority.
Real-world examples:
- Apple vs. FBI: Why This Anti-Terror Hawk Switched Sides – In 2016 Senator Lindsey Graham changed his stance on Apple creating a ‘backdoor’ in their encryption of devices. Initially, Graham had been critical of Apple challenging a request to create a ‘backdoor’, which he saw as necessary to investigate potential terrorist plots. However, by his own admission, as he learned more about the technical complexity of the domain, he realised it was far less simple than he had assumed, and that such a backdoor could have serious negative consequences. This could be considered an example of the Dunning-Kruger effect: a cyber-security expert, with a deep understanding of the domain, would likely understand immediately how such a backdoor could be exploited, whereas a layperson might assume that phone security is similar to physical security, where the practice of having a ‘master key’ for law enforcement is possible; this analogy does not describe modern encryption sufficiently well.
Fitts’ Law
Fitts’ law predicts that the time required to move to a target area is a function of the distance to the target divided by the width of the target.
(Image Reference: By Foobar628 at English Wikipedia, Creative Commons Attribution-Share Alike 3.0 Unported, https://en.wikipedia.org/wiki/Fitts%27s_law#/media/File:Fitts_Law.svg)
The consequences of this law dictate that, when designing UX or UI, interactive elements should be as large as possible, and the distance between the user’s attention area and the interactive element should be as small as possible. This has consequences for design, such as grouping commonly used tasks close to one another.
It also formalises the concept of ‘magic corners’, the corners of the screen to which a user can ‘sweep’ their mouse to easily hit – which is where key UI elements can be placed. The Windows Start button is in a magic corner, making it easy to select, and as an interesting contrast, the MacOS ‘close window’ button is not in a magic corner, making it hard to hit by mistake.
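The commonly used Shannon formulation of the law is T = a + b · log2(1 + D/W), where D is the distance to the target and W is its width. A minimal sketch (the constants a and b below are illustrative, not measured values):

```python
import math

def fitts_time(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time (seconds) to hit a target of a given
    width at a given distance, using the Shannon formulation."""
    return a + b * math.log2(1 + distance / width)

# A large, nearby target is far quicker to hit than a small, distant one.
print(fitts_time(distance=100, width=50))  # ~0.34s
print(fitts_time(distance=800, width=10))  # ~1.05s
```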
See also:
Gall’s Law
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
Gall’s Law implies that attempts to design highly complex systems are likely to fail. Highly complex systems are rarely built in one go, but evolve instead from more simple systems.
The classic example is the World Wide Web. In its current state, it is a highly complex system. However, it was defined initially as a simple way to share content between academic institutions. It was very successful in meeting these goals and evolved to become more complex over time.
See also:
Goodhart’s Law
Goodhart’s Law on Wikipedia
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
Charles Goodhart
Also commonly referenced as:
When a measure becomes a target, it ceases to be a good measure.
Marilyn Strathern
The law states that measure-driven optimizations can lead to the devaluation of the measured outcome itself. An overly selective set of measures (KPIs), blindly applied to a process, results in a distorted effect: people tend to optimize locally by “gaming” the system to satisfy particular metrics, instead of paying attention to the holistic outcome of their actions.
Real-world examples:
- Assert-free tests satisfy the code coverage expectation, despite the fact that the intent of the metric was to create well-tested software.
- A developer performance score based on the number of lines committed leads to an unjustifiably bloated codebase.
See also:
- Goodhart’s Law: How Measuring The Wrong Things Drive Immoral Behaviour
- Dilbert on bug-free software
Hanlon’s Razor
Never attribute to malice that which is adequately explained by stupidity.
Robert J. Hanlon
This principle suggests that actions resulting in a negative outcome were not necessarily the result of ill will. Instead, the negative outcome is more likely attributable to those actions and/or their impact not being fully understood.
Hick’s Law (Hick-Hyman Law)
Decision time grows logarithmically with the number of options you can choose from.
William Edmund Hick and Ray Hyman
In the equation below, T is the time to make a decision, n is the number of options, and b is a constant which is determined by analysis of the data:

T = b log2(n + 1)
(Image Reference: Creative Commons Attribution-Share Alike 3.0 Unported, https://en.wikipedia.org/wiki/Hick%27s_law)
This law only applies when the number of options is ordered, for example, alphabetically. This is reflected in the base-two logarithm, which implies that the decision maker is essentially performing a binary search. If the options are not well ordered, experiments show the time taken is linear.
This has a significant impact on UI design; ensuring that users can easily search through options leads to faster decision making.
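A minimal sketch of the two cases (the constants below are illustrative, not empirically derived):

```python
import math

def decision_time_ordered(options: int, b: float = 0.2) -> float:
    """Hick-Hyman: time grows logarithmically for well-ordered options."""
    return b * math.log2(options + 1)

def decision_time_unordered(options: int, k: float = 0.2) -> float:
    """For unordered options, experiments suggest roughly linear time."""
    return k * options

for n in (3, 7, 100):
    print(f"{n:>3} options: ordered {decision_time_ordered(n):.2f}s, "
          f"unordered {decision_time_unordered(n):.2f}s")
# Ordered menus scale gracefully; unordered lists quickly become costly.
```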
A correlation between IQ and reaction time, of the form described by Hick’s Law, has also been demonstrated in Speed of Information Processing: Developmental Change and Links to Intelligence.
See also:
– Fitts’s Law
Hofstadter’s Law
It always takes longer than you expect, even when you take into account Hofstadter’s Law.
(Douglas Hofstadter)
You might hear this law referred to when looking at estimates for how long something will take. It seems a truism in software development that we tend to not be very good at accurately estimating how long something will take to deliver.
This is from the book ‘Gödel, Escher, Bach: An Eternal Golden Braid‘.
See also:
Hutber’s Law
Improvement means deterioration.
This law suggests that improvements to a system will lead to deterioration in other parts, or it will hide other deterioration, leading overall to a degradation from the current state of the system.
For example, a decrease in response latency for a particular endpoint could cause increased throughput, and in turn capacity issues, further along in a request flow, affecting an entirely different sub-system.
The Hype Cycle & Amara’s Law
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
(Roy Amara)
The Hype Cycle is a visual representation of the excitement and development of technology over time, originally produced by Gartner. It is best shown with a visual:
(Image Reference: By Jeremykemp at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10547051)
In short, this cycle suggests that there is typically a burst of excitement around new technology and its potential impact. Teams often jump into these technologies quickly, and sometimes find themselves disappointed with the results. This might be because the technology is not yet mature enough, or real-world applications are not yet fully realised. After a certain amount of time, the capabilities of the technology increase and practical opportunities to use it increase, and teams can finally become productive. Roy Amara’s quote sums this up most succinctly – “We tend to overestimate the effect of a technology in the short run and underestimate in the long run”.
Hyrum’s Law (The Law of Implicit Interfaces)
With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviours of your system will be depended on by somebody.

(Hyrum Wright)
Hyrum’s Law states that when you have a large enough number of consumers of an API, all behaviours of the API (even those not defined as part of a public contract) will eventually come to be depended on by someone. A trivial example may be non-functional elements such as the response time of an API. A more subtle example might be consumers who are relying on applying a regex to an error message to determine the type of error of an API. Even if the public contract of the API states nothing about the contents of the message, indicating users should use an associated error code, some users may use the message, and changing the message essentially breaks the API for those users.
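A hypothetical sketch of the error-message example (the function, message text, and consumer code below are invented for illustration):

```python
import re

def fetch_user(user_id: int) -> dict:
    """Only the 'code' field is part of the documented contract."""
    if user_id < 0:
        return {"code": 404, "message": f"user {user_id} not found"}
    return {"code": 200, "message": "ok"}

response = fetch_user(-1)

# A consumer screen-scrapes the message instead of checking the code...
if re.search(r"not found", response["message"]):
    print("handling a missing user")

# ...so an apparently harmless rewording of the message (say, to "no such
# user") silently breaks this consumer, even though the documented
# contract never changed.
```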
See also:
Input-Process-Output (IPO)
Input–Process–Output on Wikipedia
Systems can be incredibly complex, but can typically be broken down into smaller parts that follow a simple pattern:
- Input is provided
- Some kind of processing or transformation is performed
- Output is returned
A sort function in a programming language or system could be a classic example of the IPO pattern; where arbitrary input is sorted based on a predicate and returned back. A web server could be modelled as an IPO system, where HTTP requests are transformed into HTTP responses. A highly complex Generative AI system could likewise be modelled in this way, with user input being passed through a complex model and a response being generated.
The IPO pattern is present in different forms across almost all technological domains, from functional programming languages that explicitly follow IPO patterns to The Unix Philosophy, which suggests that highly complex systems can be built by chaining together many simple IPO programs.
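A minimal sketch of the pattern, using the sort example above (the function name is ours):

```python
def sort_names(names: list[str]) -> list[str]:
    """Input: an arbitrary list of names. Process: sort them
    case-insensitively. Output: the sorted list."""
    return sorted(names, key=str.lower)

print(sort_names(["Charlie", "alice", "Bob"]))  # ['alice', 'Bob', 'Charlie']
```

Because such a unit has no hidden state, IPO units can be chained together, which is exactly the composition model The Unix Philosophy describes for programs.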
See also:
Kernighan’s Law
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
(Brian Kernighan)
Kernighan’s Law is named for Brian Kernighan and derived from a quote from Kernighan and Plauger’s book The Elements of Programming Style:
Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?
While hyperbolic, Kernighan’s Law makes the argument that simple code is to be preferred over complex code, because debugging any issues that arise in complex code may be costly or even infeasible.
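As an illustration (this example is ours, not from the book), compare a ‘clever’ one-liner with a simpler version of the same logic; both flatten nested lists, but only one is pleasant to step through in a debugger:

```python
# Clever: compact, but opaque when it misbehaves.
flatten = lambda xs: [y for x in xs
                      for y in (flatten(x) if isinstance(x, list) else [x])]

# Simple: the same behaviour, written to be debugged.
def flatten_simple(items: list) -> list:
    result = []
    for item in items:
        if isinstance(item, list):
            result.extend(flatten_simple(item))  # recurse into nested lists
        else:
            result.append(item)
    return result

print(flatten([1, [2, [3, 4]], 5]))         # [1, 2, 3, 4, 5]
print(flatten_simple([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```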
See also:
Linus’s Law
Given enough eyeballs, all bugs are shallow.
Eric S. Raymond
This law simply states that the more people who can see a problem, the higher the likelihood that someone will have seen and solved the problem before, or something very similar.
Although it was originally used to describe the value of open-source models for projects, it can be applied to any kind of software project. It can also be extended to processes: more code reviews, more static analysis, and multi-disciplined test processes make problems more visible and easier to identify.
A more formal statement can be:
Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.