Developer Relations

Bitcoin Prequel: How Hacker and Cypherpunk Culture Was Born

2019-10-27

This article is excerpted from a report titled “What’s Really Driving the Cryptocurrency Phenomenon” by Iterative Capital, an investment management firm headquartered in New York that focuses on cryptocurrency mining and operates the North American OTC trading desk i2 Trading. Chain News editors strongly recommend this information-rich, detailed long-form report. It examines the phenomenon from multiple perspectives, including history, social change, and shifts in the commercial software landscape, carefully tracing the root causes of Bitcoin and other cryptocurrencies and their potentially irreversible socio-economic impact.

Due to the length of the report, Chain News has excerpted parts of it for publication. The content we excerpted can be considered a “Bitcoin prequel,” providing an in-depth analysis and examination of the subcultural trend of hackers resisting the oppressive and morally controversial management and employment practices of traditional companies that emerged before Bitcoin. Understanding this content is essential for truly understanding the sociological significance of the massive impact brought by Bitcoin and other cryptocurrencies.

Readers who wish to read the full report can access it here.

Chain News thanks the report authors Chris Dannen, Leo Zhang, Martín Beauchamp, and the Chinese translator Katt Gu!

Re-understanding the Historical Context of the Cryptocurrency Phenomenon

Let this historical context tell you: Why did hackers set out to build digital currency systems?

Corporations have neither bodies to be punished, nor souls to be condemned; they therefore do as they like. — Edward Thurlow, Lord Chancellor of Great Britain, 1778-1792 [1]

Satoshi Nakamoto was the first participant in the network he built. Moreover, he left a message in the first data “block” produced by Bitcoin. The message in this so-called genesis block is as follows:

Figure 1: The message Satoshi Nakamoto left in the Bitcoin genesis block. (Source: Trustnodes [2])

This headline first appeared in the British newspaper The Times (see below). The note has also been the source of widespread misunderstanding about Satoshi Nakamoto’s purpose in creating Bitcoin.

Given Satoshi Nakamoto’s motivation to create a free economic space beyond the reach of institutional supervision, the message seems to gesture at the entangled relationship between politicians and central bankers. Many people take this hint as evidence that Bitcoin was created specifically to disrupt or destroy central banks. Read this way, the headline comes across as a statement of superiority or self-righteousness.

We believe this characterization is incorrect. If Bitcoin one day evolves into a large-scale alternative currency system, historians will no doubt regard Satoshi Nakamoto’s reference to The Times headline as prescient; but it was more than a political statement.

Figure 2: The headline reproduced in the Genesis Block. (Credit: Twitter)

In fact, placing the news headline in the genesis block served a second, more practical purpose: it acted as a timestamp. By copying text from that day’s newspaper, Satoshi Nakamoto proved that the first “data block” in the Bitcoin network had indeed been generated on that day, and not before. He knew that because Bitcoin was a new kind of network, most potential participants would doubt it was real, so it was important at the outset to send a signal of his integrity and reliability to anyone who might join. Recruiting volunteers to the project was the top priority, far more important than mocking central bank officials.
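As a concrete illustration of this timestamp, the headline is embedded in the unlock script (scriptSig) of the genesis block’s coinbase transaction, and anyone can recover it. The sketch below is a minimal decoder; the hex constant is the well-known genesis coinbase script as reported by public block explorers:

```python
# Recover the headline Satoshi embedded in the Bitcoin genesis block.
# The hex below is the genesis coinbase input script (scriptSig), as
# reported by public block explorers.
GENESIS_SCRIPTSIG_HEX = (
    "04ffff001d0104455468652054696d65732030332f4a616e2f32303039"
    "204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e"
    "64206261696c6f757420666f722062616e6b73"
)

def pushed_data(script: bytes) -> list:
    """Collect the payload of each direct pushdata opcode (0x01-0x4b)."""
    pushes, i = [], 0
    while i < len(script):
        n = script[i]
        if not 1 <= n <= 75:  # this particular script uses only direct pushes
            raise ValueError(f"unexpected opcode 0x{n:02x} at offset {i}")
        pushes.append(script[i + 1 : i + 1 + n])
        i += 1 + n
    return pushes

script = bytes.fromhex(GENESIS_SCRIPTSIG_HEX)
# The script contains three pushes: the difficulty bits (ffff001d),
# a one-byte value (04), and finally the 69-byte newspaper headline.
headline = pushed_data(script)[-1].decode("ascii")
print(headline)
# The Times 03/Jan/2009 Chancellor on brink of second bailout for banks
```

Because that headline could not have been known before January 3, 2009, its presence proves the block was not mined earlier than that date.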

For investors outside the technology industry, understanding this volunteer-based way of working is crucial to understanding why Bitcoin operates the way it does, and how it improved on traditional methods of human collaboration.

To get there, we must first explore the origins of the “war” Satoshi Nakamoto joined, and how the invention of Bitcoin turned the tide.

The Old Grudge Between Technicians and Managers

Over the past 50 years, technology companies have increasingly found themselves at odds with the engineers who build their critical systems. Recent headlines reflect this tension: at Microsoft, Amazon, and Salesforce, employees have broadly opposed their companies’ contracts with Customs and Border Protection and ICE [3]. Google’s Project Maven AI contract with the Department of Defense sparked employee protests because the technology could be used to improve the accuracy of drone strikes; although Google eventually withdrew from Maven, it said it would continue to cooperate with the US military on other projects [5][6]. Google’s announced willingness to censor search results in China prompted a joint protest by more than 1,400 employees [7]. Two former Microsoft employees recently sued the company, claiming they developed post-traumatic stress disorder after being exposed to large amounts of child pornography while working as “content moderators” [8]. YouTube employees have described their work as a daily hell of moral debate [9]. Facebook faces dissatisfaction from tens of thousands of employees over its role in gentrification, and more recently protests against its “intolerant” political culture [10][11].

Other abuses of technology systems include Equifax’s breach of personal data, and Wells Fargo’s use of its computer systems’ account-creation privileges to forge customer signatures, opening new accounts and issuing debit cards to hit aggressive sales targets [12][13]. Perhaps the worst example of enterprise-software abuse is COMPAS, an automated sentencing-recommendation system used by some court systems, which has been shown to propose different prison terms depending on the defendant’s race [14].

The tension between software developers and their employers has spread from Silicon Valley to mainstream news. “This engineer’s lament is a microcosm of a sweeping trend across the San Francisco Peninsula,” Vanity Fair reported in August 2018 [15]:

In Silicon Valley’s quieter days, employees had few doubts about the ethics of the companies they joined, because many genuinely believed they were helping build a company that could change the world. Those who helped transform the Bay Area into the greatest wealth-generating machine in human history, and who became millionaires and billionaires in the process, are now turning their backs on the hegemonic enterprises that describe themselves as moving fast, breaking rules, and never resting.

The article also quoted an unnamed Uber executive who worried that ethical issues would lead to a collective resignation of engineers: “If we can’t hire any excellent engineers, we’re done.”

This is an important moment in the business world: “excellent engineers” suddenly hold leverage within the meritocracies of some of the largest companies in world history. This development did not happen overnight; it grew out of tensions dating back decades.

Next, we’ll look at how the balance of power was broken and how Bitcoin further tilted the situation toward these “excellent engineers.”

To understand how engineers gained the upper hand, we must start from the early 20th century and learn how managers and engineers first came into conflict.

The Emergence of the Corporate System (1900-1929)

The study of human behavior in business contexts has a rich tradition. Perhaps the first to take a meaningful step in this field was Frederick Winslow Taylor, whose concept of management science, “Taylorism,” concerned rational planning, the reduction of waste, data analysis, and the standardization of best practices [16]. Using these techniques, business owners maximized their exploitation of workers. Andrew Carnegie was one of the owners obsessed with worker productivity: during the 1892 strike at his Homestead plant, a private police force was hired to fire on the striking workers [17].

Thorstein Veblen, a Norwegian-American economist, published groundbreaking research on the practitioners of management science in 1904. He developed a series of insights into the nature of “institutions” as distinct from the “technologies” they use. This distinction is a good starting point for understanding the problems faced by people who create new technologies inside institutions [18].

An important aspect of Veblen’s concept of “institutions” is that they are inherently static; they resist changes that are unfavorable to those at the top of the hierarchy. Hierarchies perpetuate themselves through what Veblen called “ceremonial” aspects, in which traditional privileges elevate the status of decision-makers. These stand in contrast to the instrumental aspects of an institution: the new technological tools and processes that actually make it profitable. “Spurious” tools may also be produced, however, because their ceremonial aspects make management look or feel good [19].

After the Great Depression, the historian and sociologist Lewis Mumford proposed that “technology” has a dual nature. The development of diverse (polytechnic) technologies combines many modes of technology in complex frameworks to solve actual human problems; the development of unitary (monotechnic) technologies pursues technology for its own sake [20]. Mumford argued that unitary technologies oppress humans. The automobile is one such example: it pushed pedestrians and cyclists off the roads, and results in a large number of deaths on American highways every year.

Mumford called the institutions, companies, and governments of his time megamachines. A megamachine, he said, is composed of many human beings, each playing a specialized role in a larger bureaucracy; he called these people “servo-units.” Mumford believed that the specialization of work weakened these people’s psychological defenses against questionable orders from leadership, because each person was responsible for only one small aspect of the machine’s overall goal. At the top of a megamachine sits a corporate chief, dictator, or commander endowed with god-like attributes; Mumford cited the personality cults of the Egyptian pharaohs and Soviet dictators as examples.

Mumford said that ceremonial, fake, unitary technological development could produce extremely deadly megamachines, like the Nazi war machine. This arises mainly from abstracting work into subtasks and specialties (such as assembly-line work or radio communication). Such abstraction allows servo-units to take part in extreme or heinous projects without moral involvement, because each is responsible for just one small step in a larger process. Mumford called the servo-units in such a machine “Eichmanns,” after the Nazi official responsible for coordinating the logistics of the German concentration camps during World War II.

In the early 20th century, Henry Ford’s practices, known as Fordism, had a huge impact on the new field of “management science.” The main features of Fordism were efficiency, specialization, mass production, reasonable working hours, and higher wages [21]. But when the Great Depression came, business owners like Ford laid off tens of thousands of workers. Wages fell, yet the punitive nature of the work remained.

In August 1931, Ford Motor Company laid off 60,000 workers. Less than a year later, security guards fired on thousands of picketers, killing 4 and injuring 25. Henry Ford installed machine guns near his home and equipped guards with tear gas and extra ammunition [22]. As the 1930s progressed, American workers continued to riot, protesting the tactics of ruthless business owners.

The Emergence of Modern Management Methods to Protect Workers (1930-1940)

After the Great Depression, a group of professionals emerged who took major business decision-making power from business owners. Industries were managed by professional managers who executed plans in the best interests of business owners and employees. Their positions and power came from their abilities, not their ownership percentages. In this new structure, greedy shareholders could be greatly constrained [23]. Harvard economics professor John Kenneth Galbraith studied this phenomenon at the time:

Power was transferred from one man - no women, or not many women - to an institution, a bureaucracy, which is the modern corporation: a great bureaucracy, which I named the technostructure. Shareholders are an irrelevant fixture; they give ownership and symbolic meaning to capitalism, but when it comes to the actual operation of the company… they rarely exercise power [24].

This “bureaucracy” of the technostructure was composed of upper-level managers, analysts, administrators, planners, “back office” operations personnel, sales and marketing personnel, controllers, accountants, and other non-technical white-collar employees [25].

In 1937, Nobel Prize winner Ronald Coase, building on the views of management scientists, proposed a theory of why these large companies emerged and why they had so many workers. Coase argued that this behavior was rational for the purpose of reducing transaction costs. He wrote:

The source of the benefits of owning a business is that market operations require certain costs, and by forming an organization and allowing management to determine resource allocation, these costs are saved [26].

In other words, when hiring technical workers, it is cheaper to keep salaried employees who come back every day than to go out each day and select new temporary candidates from the “market” of contractors. He continued [27]:

When the costs of organizing within a firm are lower than the costs of transacting through the market, firms emerge to organize those market transactions.

Companies are the most efficient way to mass-produce and distribute consumer goods: they bundle supply chains, production facilities, and distribution networks together through centralized management [28]. This improved efficiency and productivity, reducing marginal costs, making goods and services cheaper for consumers.

Exploitation of the Engineer Class by Management Bureaucracy (1940-1970)

By 1932, most large companies were effectively no longer controlled by their major shareholders, a condition economists termed “management control” [29]. This managerial trend, called the “separation of ownership and control,” spread among major listed companies.

Since the 1930s, the moral hazard of management-controlled companies has become increasingly apparent. Such companies were run by executives who, despite holding few shares, ultimately achieved a “self-perpetuating position of control” over policy, because they could steer boards through proxy voting machinery [30]. These mechanisms sometimes produced intense conflict. In the early 1940s, the view emerged that this structural differentiation in business was being imitated in politics and other fields, and that a distinct elite “management class” would appear across society [31].

Institutional economists distinguished between the management class and the “technical operator” class (those who do the work, in many cases engineers and technicians). The management elite was composed of “analysts” or “experts” serving as bureaucratic planners, budget allocators, and non-technical managers [32].

Between 1957 and 1969, a strange power dynamic emerged between analysts and technicians in computer companies; industrial economists in both the UK and the US studied this dynamic [33]. They found that analysts would fight for power, thereby creating conflict. They won favor and influence in companies by expanding departments, creating opportunities to hire more direct reports, or obtaining new promotions (a strategy called “empire building”) [34]. The overall impact on the organization was misallocation of resources and huge growth pressure [35]. Sales and development cycles continued to accelerate. The slogan of computer analysts was, “If it works, it’s obsolete.” “Analysts have a vested interest in change” [36].

This dynamic caused organizational dysfunction. Despite their limited technical knowledge, managers used a variety of social strategies to enforce their will and agendas, echoing Veblen’s description of “ceremonial” institutions 75 years earlier [37]. These strategies included:

Organizational inertia: New and threatening ideas were blocked by “idea killers” such as “the boss doesn’t like it,” “that’s not policy,” “I don’t have the authority,” “never been tried,” “we’ve always done it this way,” and “why change something that works?”

Budget games: the “foot-in-the-door” (a new program is sold modestly, hiding its true scale); “hiding the ball” (a politically unattractive plan is concealed inside an attractive one); “divide and conquer” (a budget request is submitted to more than one supervisor for approval); the “free gift” (claiming others will pay for the project so the organization can approve it); “vertigo” (a request is supported by masses of data, arranged so that their significance never becomes clear); “delayed payment” (late delivery is justified by claiming that budget guidelines demand excessively detailed calculations); and many other strategies.

These 1960s stories foreshadowed the appearance of the popular cartoon character Dilbert in the 1990s, aimed at mocking absurd management methods. Its author, Scott Adams, worked as a computer programmer and manager at Pacific Bell from 1986 to 1995 [38].

Figure 3: Dilbert captures the frustration of software engineers in corporate environments [39]. (Source: Scott Adams)

Group Identity Developed Among Professional Technicians (1980-2000)

The authoritarian behavior of the management class masked the true balance of power in technical organizations.

In the 1980s, the entire value of many industrial giants depended on their technical staff. Yet their roles placed them in a strange position, set apart from the rest of the organization: they sat at the edges of the company, closest to the work and farthest from top management and its power struggles. Because technicians did not work directly with executives, they identified far less with the company’s top management than did the managers who reported to it directly [40].

Technicians’ work was enjoyable to them but completely opaque to the rest of the organization. A power dynamic gradually emerged between technical operators and everyone else in the company: their work was difficult to supervise, and was often carried out idiosyncratically, in ways that reflected their personal preferences [41].

The ability of technicians to work this way came from the key skills they possessed. These skills acted as a wedge within the organization, bringing considerable freedom to technical operators. When technical operators provided a much-needed skill, the effect of this wedge was enhanced, providing them with job mobility. In this case, their dependence on the organization was reduced. Compared to “professional ideology” or belief in the profession and its norms, corporate ideology was usually not a powerful force among technicians [42]. Top technical experts gradually became outsiders in their own companies.

Technicians were no longer loyal to the company or its CEO; instead, they made loyalty to end users or customers their professional goal. Within a company, technicians focused on the needs of existing customers, while analysts and managers (whose jobs did not involve dealing directly with end users) focused more on abstract goals like efficiency and growth [43].

The Emergence of the Hacker Movement

The hacker movement originated among software makers at MIT in the 1960s [44]. Because it focused on practical, useful, and excellent software, it came to be seen as a remedy for the managerial chaos inside old technology companies, and it spread rapidly across the United States in the 1980s and 1990s [45]. MIT software activist Richard Stallman described hackers as playful but diligent problem-solvers who took pride in their ingenuity [46][47]:

Their main common ground is a love of excellence and programming. They want the programs they use to be as good as possible. They also want these programs to do interesting things. They want to do things in a more exciting way than anyone could imagine and show others “look how great this is. I bet you didn’t believe this could be done.” Hackers don’t want to work; they just want to play.

At a 1984 conference, a hacker who had worked on the Macintosh at Apple described hackers this way: “You can do almost anything and be a hacker. It’s not necessarily high tech. I think it has to do with craftsmanship and caring about what you’re doing” [48].

The hacker movement was not unlike the Luddite movement of the early 19th century, when cotton and wool artisans in the English Midlands rose up to destroy the mechanized looms that threatened to automate the textile industry [49]. But unlike the Luddites, who proposed no better alternative to the looms, hackers came up with another way of making software, and used it to produce better products than the commercial alternatives. By collaborating over the internet, volunteer development teams could begin producing software that rivaled that of nations and corporations [50].

The Emergence of New Jersey Style

“New Jersey style” hacking was initiated by AT&T’s Unix engineers in suburban New Jersey. AT&T had reached a settlement with the US government in 1956 that barred it from entering the computer business; as a result, throughout the 1970s it freely licensed the operating system its researchers had built, called Unix, to other private companies and research institutions. These institutions regularly modified its source code to run on particular minicomputers, and soon rewriting Unix became a cultural phenomenon in the R&D departments of major US companies.

Several development groups rewrote Unix for personal computers. Linus Torvalds created his own version, “Linux,” and released it for free, just as AT&T had released Unix (as described below, Linux was hugely successful). The approach Torvalds and the other Unix hackers took was to use playfulness as the incentive for building useful free software [51]. The Finnish computer scientist and philosopher Pekka Himanen later wrote: “To properly practice the Unix philosophy, you must be loyal to excellence. You must believe software is a craft worthy of all your wisdom and passion” [52].

Developers Realized “Worse Is Better”

Besides New Jersey style, software engineers also developed a distinctive set of design principles that ran counter to the perfectionism of institutionalized software. The old method held that one should build “the right thing,” but that approach not only wasted time, it often led to over-reliance on theory.

The “worse is better” concept was proposed by Richard Gabriel in the early 1980s and circulated in 1991 by Jamie Zawinski, later a Netscape Navigator engineer. It combined the essence of New Jersey style with hacker wisdom, and was seen as a practical improvement on the MIT-Stanford hacker method. Like the MIT philosophy, “worse is better” focused on software excellence; unlike MIT-Stanford, it redefined “excellence” to prioritize positive feedback and adoption from real-world users over theoretical ideals.

The “worse is better” view is that as long as the initial program’s design clearly expresses the solution to a specific problem, then the time and effort required to implement a “good” version early and adapt it to new situations will be less than the time and effort required to build the “perfect” version directly. The process of releasing software to users early and improving the program is sometimes called “iterative” development.

Iterative development allowed software to spread quickly and benefit from users’ real reactions. Programs released early and continuously improved often achieved success before a “better” version written using the MIT method had a chance to deploy. In two groundbreaking papers published in 1981 and 1982, the concept of “first-mover advantage” appeared in the software industry, around the same time Gabriel formalized his ideas about why network software is “worse is better” [53][54].

The logic of “worse is better” prioritizes viral growth over polish and completeness. Once a “good enough” program spreads widely, many users will take an interest in improving it and making it better still [55]. Below is a shortened version of the “worse is better” principles. They caution developers against doing what is merely conceptually satisfying (“the right thing”), urging them instead to do their best to produce an actual, functional program:

Simplicity: This is the most important consideration in design.

Correctness: The design must be the correct solution to the problem. Simplicity is slightly better than correctness.

Consistency: In some cases, consistency needs to yield to simplicity, but it’s better to abandon those parts of the design that handle less common situations than to introduce implementation complexity or inconsistency.

Completeness: The design must cover as many important situations as is practical. Completeness can be sacrificed in favor of any other quality; in fact, it must be sacrificed whenever implementation simplicity is jeopardized.

These conceptual breakthroughs must have been exciting for technicians in the early 1980s. But this excitement would soon be extinguished by rapid changes in business.

Shareholders Used Hostile Takeovers to Suppress Everyone

During the hostile-takeover boom of the 1980s, shareholders broadly regained control of large public companies. As stock prices soared, the stock market quickly became the center of the US economy. The modern venture capital era, kicked off by the investor Georges Doriot after the war, had quickly transformed into a pipeline for delivering companies to the public markets [56].

The hacker-centric environments within universities and large research companies collapsed. Researchers at institutions like the MIT Artificial Intelligence Lab were poached by venture capitalists to continue their work, but in a proprietary environment [57]. The hostile takeover trend originated in the UK a decade earlier, when some smart investors began to notice that many family businesses were no longer controlled by their founding families. Financiers like Jim Slater and James Goldsmith quietly acquired shares in these companies and eventually gained enough control to spin off and sell some of the company’s divisions. This method was called “asset stripping” [58].

In the 1980s, American bankers came up with a way to conduct large-scale financial takeovers by issuing so-called junk bonds, destroying target companies, and obtaining huge returns from selling their parts [59]. Thus, managerial capitalism ultimately lost control of enterprises and became a servant of capital markets.

The newly emerged “activist investors” represented shareholder interests. They would take action to fire and hire senior executives who could maximize stock value [60]. As the 1990s arrived, more and more hackers found their companies mired in struggles with shareholder demands, threats of hostile takeovers, and competition from new Silicon Valley startups.

As they developed rapidly, technology companies also invented some management methods to execute policies and resource allocation. Microsoft and other companies adopted strict “stack ranking” systems, regularly assigning scores to employees through “performance review” processes to determine promotions, bonuses, and team assignments. Some lower-ranked employees would be fired. This system is still used by technology companies today, but Microsoft abolished it in 2013 [61]. Google recently adopted stack ranking to determine promotion eligibility but didn’t fire lower-scoring employees [62]. Due to the distorted power dynamics it created, the stack ranking system has always been hated by employees of these large companies [63][64].

Today, investors demand that the companies they invest in accurately predict profitability every quarter, with less attention to capital investment. Tesla founder Elon Musk detailed in a blog post how quarterly guidance and short-termism undermine the long-term prospects of high-tech companies [65]. According to the Business Roundtable, a corporate consortium chaired by JPMorgan Chase CEO Jamie Dimon, quarterly guidance has “adversely affected long-term strategic investment” [66].

Key Points

Above, we examined how, beginning in the 1940s, management constrained technical workers’ lives at every turn, and how those patterns persisted into the 1990s, disenfranchising technical workers. We also discussed a powerful, union-like shared identity that transcended loyalty to any employer, an identity inseparable from the development of hacker culture and its principles.

Next, we’ll explore how resentment toward the management class grew into widespread suspicion of all institutional oversight, and how the struggle to escape such supervision took on a moral dimension. We’ll examine why this group of hackers, determined to build new tools beyond the control of the management class, viewed cyberspace and cryptography as sanctuaries. Along the way, we’ll consider the remarkable success of the free software tools hackers created, and how business owners fought against or tried to imitate hackers’ methods. Finally, we’ll consider how hacker cultural ideals were realized through the Bitcoin network.

Re-understanding Hacker Organizations and How They Organize

How did hackers build their own private economic systems?

Every good work of software starts by scratching a developer’s personal itch. —Eric S. Raymond, speaking at the Linux Kongress, Würzburg, Germany, 1997.

In this section, we’ll explore how the World Wide Web enabled hackers to gather on message boards and mailing lists, where hacker groups slowly began to form scale. We’ll review their ambitions to build private networks and how they used the experience of previous decades to formulate requirements for building such networks.

Hackers Started Developing “Free” Software

Out of hacker culture was born an informal, collaborative system of software production independent of any company [67]. This social movement, known as the “free” or “open source” software movement (FOSS), aimed to promote certain ethical priorities in the software industry. Simply put, the free software movement encouraged free licensing, and opposed companies collecting or monetizing data about users and how they use a given piece of software.

In the software industry, the word “free” doesn’t refer to software retail price, but to software that can be “freely” distributed and modified. This freedom to create derivative works was philosophically expanded to mean “not monitored, nor monetizing user data through privacy infringement.”

What exactly is the connection between software licensing and surveillance? Here’s a description of commercial software from the Free Software Foundation [68]:

If we make a copy of (commercial software) and give it to a friend, if we try to figure out how this program works, if we put a copy on multiple computers in our own home, we might be arrested, fined, or imprisoned. This is the detailed content of the license agreement you accept when using proprietary software. The companies behind proprietary software often monitor your activities and restrict you from sharing software with others. Because our computers control most of our personal information and daily activities, proprietary software is an unacceptable danger to a free society.

Although the Free Software Foundation drew on the philosophy of 1970s hacker culture and academia, its founder, MIT computer scientist Richard Stallman, formally launched the free software movement in 1983 by releasing the free open source software tool GNU (it wasn’t until 1991 when Linus Torvalds’ kernel was released that a complete operating system appeared, making GNU/Linux a true alternative to Unix [69].)

Stallman founded the Free Software Foundation in 1985. This forward-looking act anticipated the potential infringement of user personal data that platforms like Facebook might bring. In 2016, the Facebook data breach led to the data of more than 87 million Facebook users worldwide being leaked to Cambridge Analytica [70]. In 2018, a security vulnerability allowed attackers to steal Facebook access tokens, thereby taking over the accounts of more than 50 million Facebook users [71].

The GNU Manifesto explicitly called corporate work arrangements a waste of time. Part of it reads:

We define free software as “no monetization technology that violates user privacy.” In most cases, free software has no commercial defects, including: restrictive copyrights, expensive licenses, and restrictions on changes and redistribution. Bitcoin and Linux are both free software in both senses: neither monitored nor freely distributed and copied.

Free software developers formed a value system that distinguished them from proprietary software companies; the latter never shared their internal innovations for others to use, and would monitor user behavior and sell user personal data.

Stallman’s criticism of commercial software mainly focused on two aspects: non-productive competition and data monetization:

The paradigm of competition is a race: by rewarding the winner, we encourage everyone to run faster… [but] if the runners forget the reason for the reward and become focused on winning, they might find other strategies, such as attacking other runners. If the runners fight, they will all be late. Proprietary secret software is morally equivalent to those fighting runners… Wanting to be paid for work or maximize income is not wrong, as long as destructive means are not used. But the usual means in the software field today are built on destruction. Profiting from users by restricting their use of programs is destructive, because these restrictions reduce the amount and way programs are used. This reduces the wealth humanity can obtain from these programs. When restrictions are intentionally chosen, their harmful consequences are deliberate destruction [73].

The “non-productive work” Stallman mentioned can be traced back to Veblen’s concept of “fake technology.” These technologies refer to technologies developed to serve certain internal ceremonial purposes, aimed at reinforcing existing corporate hierarchies [74]:

Fake “technological” development… refers to those technologies encapsulated within ceremonial power systems; these systems’ main concern is controlling the use, direction, and consequences of this technological development, while playing the role of institutional tools that define the boundaries and edges of this development through the special dominating effects of legal systems, property systems, and information systems. These boundaries and edges are usually set to best serve the institutions seeking this control… This is how ruling and dominating institutions in society maintain and attempt to expand their hegemony over people’s lives.

Hacker Principles Were Written into “The Cathedral and the Bazaar”

In 1997, as the web flourished, hacker Eric Raymond proposed a metaphor to describe the way hackers developed software together. He compared the hacker method, which relied on voluntary contributions, to a market where participants could interact as they pleased: a bazaar.

He said commercial software is like a cathedral, emphasizing central planning and grand, abstract concepts. Like cathedrals, commercial software is often over-designed, slow, and lacks personal touch in design. He claimed that hacker software has strong adaptability and can serve more audiences, like an open bazaar.

Based on this metaphor, Raymond summarized 19 lessons about good practices he learned while participating in free software development [75]. Some of these lessons are as follows:

  • Every good work of software starts by scratching a developer’s personal itch.
  • When you lose interest in a program, your last responsibility is to hand it over to a competent successor.
  • Treating your users as co-developers is the easiest way to achieve rapid code improvement and effective debugging.
  • Given a large enough group of beta testers and co-developers, almost every problem can be quickly characterized and intuitively solved by someone.
  • Often, the most striking and innovative solutions come from realizing your concept of the problem is wrong.
  • Perfection (in design) is achieved not when there is nothing more to add, but when there is nothing more to take away (from Antoine de Saint-Exupéry).
  • Any tool should work in the expected way, but truly great tools can be used for some purposes you never anticipated.
  • If the person responsible for coordination in development work has a communication medium as good as the internet and knows how not to lead through force, more leaders are better than a single leader.

These ideas describe in very specific ways the method hackers use to build software.

Hacker Subcultures Collided in Cyberspace

As the web developed further, hacker subcultures collided on message boards and forums. All these hacker subcultures had a core set of shared attitudes and behaviors, including:

  • Sharing software and information
  • Exploring freedom
  • The right to fork software [76]
  • Dislike of authority
  • Playfulness and cleverness

But they had different ideas about how the future internet would develop.

As early as 1968, utopian ideas about the power of computer networks to create a post-capitalist society appeared [77]. Utopians believed that networked computers might allow social life in an Eden, coordinated by autonomous computer agents, without labor, coexisting with nature [78][79].

There were also dystopian ideas. A young novelist William Gibson first coined the term “cyberspace” in his 1981 short story “Burning Chrome.” In his concept, cyberspace was a place where large corporations could operate unscrupulously. In the story, hackers could truly enter cyberspace, traversing powerful systems that could crush human thought. Gibson envisioned that in cyberspace, governments were powerless to protect anyone; there were no laws, and politicians were irrelevant. Only the raw and savage power of modern corporate conglomerates [80]. Gibson, Bruce Sterling, Rudy Rucker, and other writers formed the core of this extreme dystopian literary movement.

Utopians Started Gaining Wealth

Another group of hackers came from the counterculture of the 1960s. Many of them had a more optimistic view of the web, seeing it as a new safe world where radical things could be realized. Like anti-corruption culture, cyberspace might be a place that liberated individuals from old and corrupt power hierarchies [81].

This optimistic idea spread through Silicon Valley entrepreneurial circles in the 1980s and 1990s, creating a positive attitude toward technology, seeing it both as a force for good and a path to wealth. A British scholar at the time wrote [82]:

This new belief emerged from the wonderful fusion of San Francisco’s cultural bohemianism and Silicon Valley’s high-tech industry… It mixed hippie free spirit with yuppie entrepreneurial enthusiasm. This fusion of opposites was achieved through a deep belief in the liberating potential of new information technology. In digital utopia, everyone would be both trendy and rich.

This “old hippie” thinking reached its climax in 1996 with the publication of the “Declaration of the Independence of Cyberspace.” The declaration was written by John Perry Barlow, a former lyricist for the American rock band Grateful Dead, who was part of the anti-corruption culture [83]. By the mid-1990s, Silicon Valley startup culture and the newly founded Wired magazine began to rally around Barlow’s utopian vision of the World Wide Web. He began holding gatherings he called Cyberthons, trying to bring these movements together. Barlow said they inadvertently became a hotbed for entrepreneurship:

As envisioned, [Cyberthon] was like the acid test of the 90s, and we had considered getting some of the same people involved. But it immediately acquired a financial, commercial nature, which was a bit unsettling for old hippies like me at first. But when I saw it start working, I thought: oh, okay, if you’re going to do an acid test for 90s people, it’s better to have some money [84].

The Emergence of the Cypherpunk Movement

While utopians believed everyone would become “trendy and rich,” dystopians believed, as William Gibson envisioned, that the consumer internet would be a prison of corporate and government control and surveillance. They began to save themselves from it.

They found a potential solution in cryptographic systems that could be used to escape surveillance and control. Tim May, then an assistant chief scientist at Intel, wrote “The Crypto Anarchist Manifesto” in 1992 [85]:

The technology of this revolution - which is definitely a social and economic revolution - has existed in theory for the past decade. These methods are based on public key encryption, zero-knowledge interactive proof systems, and various software protocols for interaction, authentication, and verification. To date, the focus has been on academic conferences in Europe and the US closely monitored by the National Security Agency. But until recently, computer networks and personal computers have reached speeds sufficient to make these ideas feasible in reality.

Regulators until recently classified strong cryptography technology as weapons-grade technology. In 1995, a prominent cryptographer sued the US State Department over export controls on cryptography technology, after the US government ruled that floppy disks containing encryption system source code were legally placed on the munitions list along with bombs and flamethrowers, and their export required prior State Department approval. The State Department ultimately lost the case, so now cryptography code can be freely transmitted [86].

Strong encryption has a distinctive property: it’s simpler to deploy than to destroy. For any man-made structure, whether physical or digital, this is a rare quality. Until the 20th century, most “secure” man-made facilities required a lot of time and effort to build but were easily penetrated by suitable explosives or machinery; just like castles versus siege warfare, bunkers versus bombs, codes versus computers. Princeton computer science professor Arvind Narayan wrote [87]:

For over 2000 years, evidence seemed to support Edgar Allan Poe’s claim that “human intelligence cannot create a cipher that human intelligence cannot solve.” It’s like a cat-and-mouse game, where those with more skills and resources always have the advantage. However, due to three independent developments, this suddenly changed in the 1970s: symmetric cipher DES (Data Encryption Standard), asymmetric cipher RSA, and Diffie-Hellman key exchange.

By the 1990s, he said:

For the first time, some encryption algorithms had clear mathematical evidence (though not mathematically provable) of their strength. These developments occurred on the eve of the microcomputing revolution, when computers were gradually seen as tools of empowerment and autonomy, rather than tools of the state. These were the seeds of the “encryption dream” [88].

Cypherpunks were a subculture of the hacker movement, mainly focused on cryptography technology and privacy. They had their own manifesto (written in 1993), their own mailing list (1992 to 2013), and membership reached 2000 [89]. Here is an abridged version of the Cypherpunk Manifesto. In the last few lines, it declares the need for a new digital currency system as a way to obtain privacy from institutional surveillance.

The Cypherpunk Manifesto

The term “cypherpunk” is actually a play on words. It derives from the term “cyberpunk,” a subgenre of science fiction pioneered by William Gibson and his contemporaries [90]. The Cypherpunk Manifesto reads:

So it can be seen that privacy in an open society requires anonymous transaction systems. Currently, cash is such a system. Anonymous transaction systems are not secret transaction systems. In an anonymous system, individuals only disclose their identity voluntarily; this is the essence of privacy. Privacy in an open society also requires cryptography… We cannot expect governments, corporations, or other huge, faceless organizations to grant us privacy out of their conscience. They will definitely judge us, and we should expect them to do so. To resist their speech is to fight against the nature of information. Information doesn’t just want to be free, information will be free. Information is destined to expand and occupy all available storage space. Information is rumor’s brother, young and strong; information is fast-running footsteps, with more eyes and richer knowledge than rumor, but less understanding. We must defend our privacy. We must work together to build systems that can handle anonymous transactions. For centuries, people have protected their privacy through whispers, night, envelopes, closed doors, secret hand signals, and mailmen. Past technologies could not support reliable privacy, but electronic technology can. We cypherpunks will dedicate ourselves to building anonymous systems. We will defend our privacy with cryptography, anonymous mail systems, digital signatures, and electronic money.

Over the past few decades, there have been many attempts to create digital currency systems, some of which were initiated by members of the cypherpunk mailing list. Satoshi Nakamoto was a member of the mailing list; other members included Tim May, founder of crypto anarchy; Wei Dai, originator of the original concept of P2P digital currency; Bram Cohen, founder of BitTorrent; Julian Assange, founder of WikiLeaks; Phil Zimmerman, founder of PGP encryption; Moxie Marlinspike, developer of OpenWhisper protocol and Signal Messenger; and Zooko Wilcox-O’Hearn, member of Z-cash [91][92].

Cryptographic Systems Acquired “Moral Character”

Modern engineers have made many efforts to build organizations that can implement ethical codes in their fields, including:

  • 1964: The National Society of Professional Engineers Code of Ethics was published, focusing on social responsibility, namely “the safety, health, and welfare of the public.”
  • 1969: IEEE.22 Union of Concerned Scientists was founded at MIT.
  • 1982: The International Association for Cryptologic Research (IACR) was founded to promote the use of cryptography to maintain public welfare.
  • 1990: The Electronic Frontier Foundation (EFF) was founded.

The technological optimism of 1990s Silicon Valley also laid the groundwork for the industry’s growing moral pitfalls. In a 2005 paper titled “The Moral Character of Cryptographic Work,” UC Davis computer science professor Phillip Rogaway suggested that technology practitioners should carefully examine the assumption that software is inherently “good” for everyone [93]:

If you’re a technological optimist, a beautiful future comes from your work. This means limiting moral responsibility. The most important thing is to do your job well. This even becomes a moral requirement, because the work itself is your social contribution.

Rogaway suggested that technology practitioners should refocus on moral responsibility and build new encryption systems that empower ordinary people:

Nevertheless, I do believe it’s accurate to say that traditional encryption embeds the potential to empower ordinary people. Encryption directly supports freedom of speech. It doesn’t require any expensive or hard-to-obtain resources. It can be achieved through something easily shared. Individuals can use it without backdoor systems. Even the habitual language used about encryption suggests a worldview in which ordinary people - the Alices and Bobs of the world - will have the opportunity to have private conversations. From another perspective, we must work hard to embed encryption into an architecture that supports rights, and we may encounter many obstacles in this process.

“Responsible” Hackers Started Organizing Together Since the 1990s

Many free open source software projects have third-party developers contributing code to projects for altruistic reasons, integrating the improvements they make on the original version into the main branch. In this way, open source software projects can accumulate the work of hundreds or thousands of uncoordinated individuals without any central organizational intervention. This organizational form is also called the “open work assignment” method.

Open work assignment refers to a management method that gives knowledge workers extremely high freedom. In the open work assignment model, knowledge workers have the right to start or join any area of a project and decide how to allocate their time. This method is considered a form of “self-organization” and has been widely used in the free software world outside any corporate or partnership structure.

In open assignment structures, decision-making ability is generally in the hands of those closest to the problems that need to be solved. Projects have a “lead maintainer,” usually the person who has worked on the project the longest or has the most influence. Any arbiter of project direction is limited to the project’s workers [94]. If a project leader is replaced by new developers, they can choose to become a follower of the project or completely detach from the project. Unlike traditional management structures with fixed power, in open work assignment, a leader’s title is only a temporary distinction.

Brief Introduction to the Principles of Open Work Assignment

As we discussed earlier, the “analysts” who make up corporate management usually have a vested interest in change. Marketing activities may replace engineering priorities. Continuous, unnecessary changes may disrupt program functionality in unexpected ways, so poorly managed proprietary network platforms may lack stability, or experience interruptions, downtime, or “feature creep” [95].

In open source software projects using open assignment, the changes you propose must be implemented by yourself. No non-technical managers will get involved to propose flashy features; and even if someone makes such a proposal, it’s unlikely others will choose and build these features.

Proposed additions or changes are usually implemented by the proposer, and only when other maintainers of the project agree that the problem being solved is real and the solution is appropriate will the proposer be allowed to commit code.

This alternative model of organizing work relationships is considered one of the main achievements of the free and open source software movement [96].

Advantages of Open Work Assignment

Open work assignment systems have many benefits, one of which is that it minimizes “technical debt.” Technical debt is a metaphor that refers to the extra work caused by using quick, crude solutions now. In practice, meaningless feature requests, redirects, changes, poor communication, and other issues can easily lead to technical debt. Regulations and legislation imposed on software companies also generate technical debt.

In this sense, corporate management and government supervision are the same, because both are sources of coercive, ceremonial, unitary, and fake technological development and technical debt.

If technical debt accumulates, it becomes very difficult to make meaningful improvements to the project later. High technical debt systems are like a Sisyphean dilemma, because maintaining the status quo requires more and more effort, and less and less time is available for planning the future. Therefore, this type of system requires people’s unreserved commitment. Technical debt has a high human cost, as one developer described in his blog (edited for length) [97]:

Work frustration: A codebase with high technical debt means its feature delivery will be very slow, which causes a lot of frustration and embarrassment when discussing business capabilities. When new developers or consultants join the project, team members have to face the confused expressions of newcomers and the undisguised contempt in their eyes. To relate this to the technical debt metaphor, think of a person with mountains of debt trying to explain to others why they’re being harassed by creditors. This is not only embarrassing but also lowers team morale.

Team infighting: Not surprisingly, this situation often leads to arguments between teams. Again, this is like the behavior we might see in a heavily indebted married couple. Teams draw lines between themselves. They add arguments on top of the frustration and embarrassment about the problem itself.

Skill degradation: As embarrassment and finger-pointing intensify, team members can feel their professional relevance gradually losing. Overall, they want to change as little as possible, because doing so would further slow down an already delayed process. This is not only too slow but also too risky.

Technical debt is usually caused by starting projects without a clear concept of the problems that need to be solved. Therefore, when adding new features, developers may misunderstand the actual needs of target users. Ultimately, the project falls into an “anti-pattern,” that is, designs and behaviors that appear to be in the right direction on the surface but actually lead to technical debt. Anti-patterns are project and company killers because they accumulate large amounts of technical debt [98].

In contrast, in open assignment projects of global significance, the benefits of open assignment governance are maximized. These benefits include [99]:

  • Coordination: The person who conceives the work is the one who does it.
  • Motivation: You’re choosing your own project, so you’ll value it more.
  • Responsibility: Because you chose your own tasks and solved the problems yourself, if problems arise, you have no one to blame but yourself.
  • Efficiency: Free time arrangement, new collaborators can start work immediately. No bureaucracy or formalism affects your programming speed.

It turns out people like open assignment. In 2005, MIT Sloan School of Management and Boston Consulting Group conducted a study on the motivations of open source software engineers. The study reported [100]:

We found… enjoyment-based intrinsic motivation, that is, the creativity a person feels in a project, is “the strongest and most pervasive driving factor when voluntarily engaging in software development work…” Many people are puzzled by the seemingly irrational and altruistic behavior of (free software) movement participants: providing code to others, leaking proprietary information, helping strangers solve their technical problems… Free and open source software participants can maintain flow by choosing projects of varying difficulty that match their skill level, a choice not achievable in their regular jobs.

This led the management science community to recognize the evils of the 20th century. Now, they’re looking for ways to reorganize and give decision-making power to project operators!

Commercial Software Makers Reluctantly Started Following Suit

As a marketing plan for using free software within enterprises, the “open source” movement officially began to rise in 1996. It defined the use of free software in ways that businesses could understand [101].

GNU creator Stallman said the difference between free software and open source software is moral: “Most discussions about open source don’t focus on right and wrong, only on popularity and success” [102].

Regardless of the difference, facing the sudden attack of software that anyone can license, copy, fork, deploy, modify, or commercialize, traditional tech giants began to lose their footing. In 2000, Microsoft Windows CEO Jim Allchin said, “Open source is an intellectual property destroyer” [103]. In 2001, Steve Ballmer said: “From an intellectual property perspective, Linux is a cancer that attaches itself to everything it touches” [104].

But the fact is: open source and open assignment governance methods are not only mentally and physically enjoyable but also produce very successful software. In 2001, a movement to bring open assignment methods into enterprises gradually developed. This approach was called “agile development,” a last resort for commercial software companies to try to retain relevance. If they couldn’t beat open source software, they could join it, building commercial services and products on open source software. Agile development supporters imitated the previous cypherpunks and cyberspace enthusiasts and wrote a founding document. The Agile Manifesto reads in part [105]:

To succeed in the new economy, stride into e-commerce, e-trade, and the network age, enterprises must break free from the Dilbert-style busywork and obscure policies in the company. This freedom to liberate people from the poverty of corporate life attracted agile methodology supporters and scared away traditionalists. Frankly, agile methods terrify corporate bureaucrats - at least those who like to push process for process’s sake, rather than trying their best to do the best for “customers” and deliver “promised” products in a timely and practical manner - because they have nowhere to hide.

Free Open Source Unix Variants Achieved Huge Success

Microsoft eventually integrated Linux and open source technology into its enterprise Azure platform in 2012. Thus, Linux defeated Windows and other proprietary operating systems and became the foundation of the web. Currently, 67% of servers on Earth use Unix-like operating systems. Of these 67% of users, at least half run Linux. Regardless of what type of computer or phone you use, when you browse the web, you’re likely connecting to a Linux server [106].

Other free open source libraries have also succeeded in enterprise environments. Bloomberg LP uses and contributes code to the open source Apache Lucene and Apache Solr projects, which are critical to the search functionality in its terminals [107]. FreeBSD is another open source Unix alternative that is the foundation of the “user space” in macOS and iOS [108]. Google’s Android system is based on Linux [109].

BMW, Chevrolet, Mercedes, Tesla, Ford, Honda, Mazda, Nissan, Mercedes, Suzuki, and the world’s largest automaker Toyota all use automotive-grade Linux in the vehicles they manufacture. Although both BlackBerry and Microsoft have automotive platforms, they’re only used by a few automotive OEMs. As of 2017, both Volkswagen and Audi have switched to Linux-based Android platforms [110][111].

In 2018, Tesla released the open source Linux software code for its Model S and X cars, including Tesla Autopilot platform, hardware kernel source code, and infotainment system [112].

These examples demonstrate two counterintuitive lessons about software [113]:

  • A software’s success is often inversely proportional to the amount of capital behind it.
  • Many of the most meaningful advances in computer technology are the results of hobbyists working outside corporate or academic systems.

Modern Organizational Design Appeared in the Image of Hackers

Today, many software companies are trying some method to reduce reliance on management hierarchies. Spotify and GitHub are two highly successful companies completely organized through open work assignment.

Spotify produced two in-depth videos about how its independent project teams collaborate. These videos are instructive for how open assignment organizations can work together to build single platforms and products using multiple component teams without any central coordinator.

Figure 4: Spotify’s “Engineering Culture” video summarizes how open work assignment works in commercial software companies. In practice, traditional companies find it difficult to adopt this organizational design without external help. (Source: YouTube)

  • How Spotify Works, Part 1 [114]
  • How Spotify Works, Part 2 [115]

Open work assignment works similarly within companies as it does outside corporate structures, but with some exceptions. Although company-wide rankings don’t determine project assignments, they’re usually a factor in determining compensation.

The “Responsive Organization” was a movement Microsoft launched to adopt open assignment organizational design within itself and Yammer, the corporate messaging system it acquired in 2012 [116]. Currently, consulting services specializing in “organizational design” and transitioning to responsive team structures have emerged.

Finally, attempts to create “ideal engineering conditions” within companies may only last as long as the company is comfortable in its category. Google also used an open assignment governance method called “20% time” in its early days, but it was eliminated later when the company continued to develop and adopted stack ranking [117].

More research shows that in most companies, power hasn’t really shifted to “makers.” According to a research initiative by MIT Sloan Management Review and Deloitte Digital, digitally mature companies should push decision-making further into the organization, but this isn’t happening [118]. Respondents to the study said they want to continuously improve their skills, but they don’t get support from their employers to participate in new training.

This finding reflects the MIT study mentioned earlier on the motivations of open source contributors, which found that programmers like to participate in open source projects because it’s a path where they can develop new, lasting, and useful skills according to their own wishes [119].

Key Points

In this section, we introduced hacker culture and its method of creating software around a specific set of design principles and values. We explained how hacker culture developed an organizational model. And we proposed that these models can make computer software more accessible to non-professionals and non-academics, thereby disrupting the social divisions caused by strict licensing and closed source code. Additionally, we demonstrated the success of free and open source methods at a fundamental level through software like Linux and Apache.

Finally, we showed how commercial software companies tried to imitate open work assignment methods. Using free open source software, the hacker movement effectively destroyed the institutional monopoly on research and development [120]. All of these became the foundation for the birth of Bitcoin.

Readers who wish to read the full report can access it here.

[References 1-120 omitted for brevity - see original for full reference list]

Reprinted with permission: Developer Relations »


Similar Posts

Content icon
Content