Open-Source:
A Movement in Search
of a Philosophy
Manuel DeLanda
The hacker movement referred to by the term "open-source"
has burst into public consciousness in the last few years due to its spectacular
success in the production of reliable and robust software. Perhaps the most
obvious symptom of this success is the fact that open-source software in
several key areas (operating systems, server software) is the only serious
alternative to the domination of the market by large corporations like Microsoft.
Its paradigm of software production, collectively-created programs in a
process where users are also (to different degrees) developers, has gone
beyond the expectations of most analysts, and taken most corporate managers
by surprise, many of whom (at corporations like IBM and Sun) are rapidly switching
from proprietary standards to open systems.
The movement has also produced several authors who have tried to either
give the movement an a priori moral philosophy, or alternatively,
to distill from the actual practice of the thousands of hackers involved
a pragmatic philosophy. Thus, when I say that this movement is still in
search of philosophical foundations I do not mean to imply that it does
not have a philosophy. It has in fact several but, in contrast to the high
quality of its software products, the philosophies in question are shallow
and brittle. The purpose of this essay is not, however, to criticize these
philosophical ideas but, on the contrary, to show that their low quality
is quite irrelevant to the success of the movement, a success which, as
I just pointed out, has been measured in practice by the enormous difficulty
involved in defeating entrenched commercial interests. In a nutshell, the
moral of this essay is that what matters about the open-source movement
is not so much the intentional actions of its main protagonists, actions
which are informed by specific philosophies, but its unintended collective
consequences.
The plan of the essay is as follows. I will begin with a few definitions
of technical terms ("source code", "compiler", "operating
system") which are necessary to follow the rest of the paper. I will
then discuss a few of the ideas put forward by open-source philosophers
(Richard Stallman, Eric Raymond) focusing not on their weaknesses but on
their practical consequences. In particular, Stallman's achievements go
beyond the creation of programs and involve the design of a contract (the
GNU General Public License, or GPL) which has been arguably as crucial to
the success of the movement as any piece of software. The spirit of the
license is clearly informed by Stallman's moral philosophy but its unintended
consequences go far beyond it. Similarly, Eric Raymond's attempts at an
ethnography of the movement, and to distill "rules" which capture
its dynamics, fall short of success but he has in addition provided good
material to study those unintended consequences. Finally, I will introduce
some ideas from the field of Transaction Cost economics (also known as Neo
Institutionalist economics) which will prove useful in giving a more robust
philosophical foundation to open-source practice. Needless to say, no attempt
will be made to create a full-fledged philosophical account, not only because
it would be premature to do so, but because the movement simply does not
need one; and if such a philosophy did evolve, it would have to follow a
path similar to that of the software itself, that is, be the collective product
of the users of that philosophy.
1. Definitions
Let me begin with a few definitions. First of all, the term hacker refers not to the cyber-vandals on whom the media has focused so much attention (the correct term for those who illegally break into private networks is cracker) but to anyone who writes his or her own software.
The term does imply (although I doubt it is part of its "meaning")
that the software writer in question does not have a Computer Science degree,
that is, it typically refers to self-taught craftsmen. Thus, the term carries
the connotation that a software writer enjoys the creation of programs (as
opposed to being motivated by professional duty or economic rewards) and
that he or she has a strong respect for the values of craftsmanship (elegant
solutions to a problem are admired in and of themselves).
Next, the term source code refers to one of the two main forms in
which a computer program may exist. When a user buys a particular application
from a commercial vendor, a word processor, for example, the program typically
has a form which is next to unintelligible to human beings but perfect for
a computer. It consists of long series of ones and zeroes which code for
specific machine instructions. When that same program is being developed,
on the other hand, it is written not in machine language but in a high-level
language (such as C or Pascal), which is not only readable by humans but
is also accompanied by comments which explain (to other humans) what each
part of the program is intended to do. It is this human-oriented
version which is referred to by the term "source code" and which,
because of its intelligibility, is used to change or further develop a particular
program. Once finished, the source code is converted into machine code by
a special program called a compiler. It is this compiled version
which is typically sold in stores, a fact that implies that the users of
the program are not supposed to continue changing and improving it. (This
could in principle be done by "reverse engineering" the machine
code, but in practice it is too difficult to be worthwhile.)
Finally, it is important to distinguish different kinds of software. Non-programmers
are familiar mostly with application programs:
word processors, accounting spreadsheets, Internet browsers, graphic design
tools and so forth. But to a hacker, it is the software which is used in
the production of those applications that matters. I already mentioned one
such program, the compiler, but there are several others: debuggers (which
allow programmers to track down errors, or "bugs"), program text editors, and several
other production tools. Then there is what is perhaps the most important
piece of software of them all, the operating system. Unlike applications,
which are run to perform a particular task and then closed, an operating
system is always running in the background (as long as the computer is turned
on), linking all the different devices connected to the computer (input
devices like the mouse, storage devices like the hard disk, display devices
like an LCD screen, etc.). No application program, or for that matter,
no production tool, can run without an operating system in the background
with which the applications or tools must be compatible. The crucial importance
of the operating system may be glimpsed from the fact that the Justice Department
investigated Microsoft not so much because of its large
size or dominant market share, but because it produces both operating systems
(Windows) and applications. Controlling both the platform on which
programs run and what runs on top of that platform gives Microsoft
an unfair advantage over competitors who create only applications. Microsoft
may, if it wishes, delay the release of technical details on a new operating
system, thereby acquiring an unfair advantage over its rivals, who must
wait for those details to rewrite their applications, while Microsoft has as much time as
it wants. (The Justice Department also prosecuted Microsoft for
more obvious misuses of its market power, but I believe it was the joint
ownership of operating system and applications that clinched the case.)
Now, to put all these terms together: the strength of the open-source movement
lies in the fact that it has created alternative operating systems (such
as Linux) and production tools (compilers, debuggers),
as well as in the fact that it distributes these alternative programs in
source form. In other words, the programs are distributed in a form that
lends itself to further improvement and development by their users. The term
"open-source" was coined to reflect this alternative conception
of how software should be produced, a paradigm which is at
once evolutionary and collective. The other term by which the movement is
known, "free software", also refers to this freedom to change,
adapt and otherwise evolve a particular program, without the constraints
usually associated with intellectual property rights. The program in question
may in fact be sold commercially, so long as the source code and the rights
to alter it in any form are always included. As the coiner of the term (Richard Stallman) puts
it, the term "free" is used as in "free speech", not
as in "free beer".[1]
2. Unintended Consequences
This is not the place to give a detailed history of the open-source movement.
Several book-length accounts exist already, and there are many places on
the Internet where summaries are offered.[2] Instead,
I would like to focus on two items which have been crucial to the success
of the movement: the design of its license (more exactly, of one of its
licenses, the General Public License, since there are alternatives) and
the design of its production model (again, more exactly, of one of its models,
the production model behind the creation of Linux). These
two items belong to what Transaction Cost economists call "the institutional
environment" and "the governance structures" of an economy.[3]
Let me begin by stating the problem which the General Public License (GPL
from now on) is meant to address, that is, the problem of intellectual property
rights. The textbook definition of the problem begins by distinguishing
goods which can be consumed only by a given person (or persons), that is,
goods whose very consumption excludes others from consuming them, from goods
that do not possess this property. Food is an example of the first type,
a good which is "rivalrous in consumption", while ideas are an
example of the second type: if someone consumes a song or a book, this act
by itself does not exclude others from consuming the same song or book,
particularly if technologies of duplication and distribution have made the
costs of reproduction minimal. The economic problem of intellectual property
is that when goods which are not rivalrous in consumption are made subject
to property rights, the exclusion aspect of these rights generates social
waste: given that additional copies of a given good may be generated and
distributed at virtually no cost (this is particularly true of goods in
digital form) excluding people from them means that wants will go unsatisfied,
wants that could have been satisfied at very little cost to society. On
the other hand, not making these goods subject to property rights means
that those producing them will have no incentive to do so, particularly
if the costs of production happen to be high. Thus the problem of intellectual
property needs to be solved by a careful balancing of social costs and producer
benefits, a balance which must be achieved case by case.
I will return below to the question of incentives to producers. Hackers
tend to think that this is a question that does not apply to them since
their work is motivated by love of craftsmanship, not to mention a deep
hatred for Microsoft and similar corporations, but the issue is
not one to be solved that easily. The other side of the question, the social
costs of exclusion, on the other hand, is one which has received a lot of
attention from hackers like Richard Stallman, creator of the GPL. The problem
with Stallman's approach is that he over-moralizes the question. In his
treatment, intellectual property rights become an "artificial"
monopoly which interferes with "the users' natural right
to copy".[4] This takes him beyond the social
costs of waste in non-rival goods to a condemnation of intellectual property
as causing "psychosocial harm" in the sense of promoting divisiveness
in society. In his words:
Suppose that both you and your neighbor would find it useful
to run a certain program. In ethical concern for your neighbor, you should
feel that proper handling of the situation will enable both of you to use
it. A proposal to permit only one of you to use the program, while restraining
the other, is divisive; neither you nor your neighbor should find it acceptable.
Signing a typical license agreement means betraying your neighbor...People
who make such choices feel internal psychological pressure to justify them,
by downgrading the importance of helping one's neighbors - thus public spirit
suffers. This is psychosocial harm associated with the material harm of
discouraging use of the program.[5]
I do not wish to go into a long discussion of the philosophical problems
of Stallman's stance. Strategically, I have never thought it is a good idea
to base one's philosophy on "universal moral principles", particularly
when they involve the generalization of one's morality into everyone's morality.
The very fact that many hackers reject this moral stance should warn us
against its universality. And if the relatively small hacker community does
not identify with these moral values, one wonders where Stallman gets the
idea that he is defending "the prosperity and freedom of the public
in general."[6]
The relative unimportance of this moral stance to the open-source movement,
as opposed to more pragmatic considerations, may be clearly perceived
when one reads Stallman's justifications for pragmatic choices which apparently
break with his iron-clad morality. While developing his first few software
tools for the GNU project (GNU stands for "GNU's Not Unix"),
and before the development of the kernel of what would be the main operating
system of the movement, Stallman had of necessity to use another operating
system (UNIX). This is the kind of pragmatic constraint that should hardly
bother anyone at a deep ethical level, but for Stallman it needed special
justification:
As the GNU project's reputation grew, people began offering
to donate machines running UNIX to the project. These were very useful,
because the easiest way to develop components of GNU was to do it on a UNIX
system, and replace the components of that system one by one. But they raised
an ethical issue: whether it was right for us to have a copy of UNIX at
all... UNIX was (and is) proprietary software, and the GNU Project's philosophy
said that we should not use proprietary software. But, applying the same
reasoning that leads to the conclusion that violence in self-defense is justified,
I concluded that it was legitimate to use a proprietary package when it
was crucial for developing a free replacement that would help others stop
using the proprietary package... But, even if this was a justifiable evil,
it was still an evil.[7]
I said before that criticizing hackers' philosophies was not the point of
this essay. Quotations like the one above make it too easy to dismiss the
real achievements of these people. In particular, Stallman's deep belief
in the moral value of "freedom" (and his equally strong stance
on the evil of anything that constrains that freedom) guided him in the
design of a license agreement (the GPL) which has had extremely positive
effects on the movement. These effects may be said to be unintentional consequences
in the sense that one can perfectly imagine some other hacker guided by
more pragmatic considerations coming up with basically the same idea. These
pragmatic concerns have less to do with the "evils of proprietary software"
and more with the kind of environment conducive to the creation of good
software (where "good" means "robust against crashes",
a highly desirable quality particularly in operating systems and server
software). I said before that distributing software in the form of source
code allows users to stop being passive consumers and get actively involved
in the evolution of a given product. The ability to freely change and
adapt a given piece of software, particularly production-level software
(as opposed to end-user applications), allows the formation of development
communities within which many of the inevitable errors (or "bugs")
that are part and parcel of any complex program can be rapidly discovered
and fixed. This community-based debugging results in software that can be
conclusively shown to be more resilient against malfunction than commercially
available programs.
This, however, immediately raises the question of free-riders: what is
going to stop a particular user (which may be an institution) from benefiting
from this shared source code, alter it a bit, then close it (that is, compile
it) and sell it as proprietary software? This is where the GPL comes in.
The terms of the license agreement are cleverly designed to exclude free-riders,
by forcing everyone who uses previously open-sourced code to open whatever
contributions he or she makes. This effect is not achieved by abolishing
intellectual property (each contributor retains the copyright to whatever
piece of code he or she has developed) but by altering the way in which
the rights of exclusion are deployed. Exclusion, as I said before, is the
main cause of social costs for non-rival goods, so deploying this power
in a different way constitutes a novel solution to the problem. As law professor
David McGowan has argued "Open-source software production is not about
the absence or irrelevance of intellectual property rights. Open-source
production instead rests on the elegant use of contractual terms to deploy
those rights in a way that creates a social space devoted to producing
freely available and modifiable code.... Open-source production, therefore,
does not take place within a true commons..."[8]
McGowan goes on to explain the details of this novel use of property rights:
The GNU GPL system rests on the assignment of property
rights (copyright) in an author, who is then able to grant nonexclusive,
transferable rights to community members, subject to limitations that enforce
community tenets. This structure gives subsequent users of the copyrighted
code the ability to pass along restrictions that embody open-source tenets,
resulting in the dissemination of the tenets in proportion to the distribution
of the code. The right to exclude is not abandoned; however, this model
gives the rights-holder the ability to enforce those tenets through an infringement
action if necessary.[9] (My italics.)
Thus, the originality of the GPL is that rather than actively exploiting
the right to exclude, as it is done in conventional licenses, this right
is held "in reserve as a method of enforcing adherence to the norms
embodied in the license".[10]
In this way, the license becomes a legal instrument for community-building
(preserving and propagating the norms of a once small community allowing
it to grow and stabilize) in addition to its more immediate goals of keeping
the software open and of serving as a means to allocate credit for particular
contributions. (The GPL also mandates that the names of the creators of
specific pieces are not removed from any future release.) The very fact
that the license acts as an "enforcement mechanism" for openness
shows how far its function is from one of just promoting "freedom"
(that is, Stallman's original intention). Indeed, when other hackers began
coming up with alternative licenses (such as the BSD license for the versions
of UNIX developed at the University of California at Berkeley) their creators
argued these licenses, which did not force the user to share, were in fact
more in line with the moral principle of "freedom" than the GPL.
This attack on the GPL's lack of freedom is normally expressed by saying
that the license is "like a virus", that it "contaminates"
all the code produced downstream from an originally open-source piece of
code.[11]
Although this characterization sounds insulting to Stallman, it is in fact
devoid of any negative connotations if one accepts the role of the license
in propagating and enforcing community norms.
Let me now move on to comment on the thoughts of the other hacker-philosopher
of this movement, Eric Raymond. Unlike Stallman, Raymond does not have faith
in the power of abstract, a priori moral principles and prefers
to distill his values from "ethnographic" studies of the actual
practice of hackers. Even a casual examination reveals that loyalty to craft
traditions (going back to the early 1960's) and pride of craftsmanship (or
to phrase it negatively, a deep contempt for bad quality) are more important
motivators of hacker behavior than the ethical duty to help one's neighbor.
Raymond therefore espouses a more pragmatic, less moralistic, approach,
and this has led him to concentrate on an examination of the practical conditions
of success of open-source projects.
To begin with, while to an outside observer the idea of hundreds of people
dispersed around the world working on incrementally improving a program
may seem like anarchy, the actual development of specific projects (the
Linux operating system, the fetchmail email program, the Apache
server program and so on) is anything but anarchic. Each project has a leader
(or committee of leaders) who has a final say on what improvements get included
in the "official" version of the program. This unquestioned authority
of project leaders is sometimes expressed by saying that they are "benevolent
dictators", but this term is highly misleading and brings back the
morality questions which the pragmatic approach was supposed to have discarded.
A more appropriate description of the task of project leaders (other than
their contributions as writers of code) is that their role is the creation
of a community supporting a project. When commenting on the real achievements
of the leader of the Linux project, Linus Torvalds, Raymond says
that "Linus's cleverest and most consequential hack was not the construction
of the Linux kernel itself [the core of the new operating system],
but rather his invention of the Linux development model".[12]
This development model involved constantly releasing any new piece of code
(so that interested users could immediately begin to work on it, thereby
keeping them constantly motivated), delegating responsibility for specific
areas to motivated users (making them co-developers), promoting cooperation
through a variety of means, and being as self-effacing as possible to block
any suspicion that credit for the work done would not be shared equally,
or that decisions about the quality of a given piece of code would not be
made objectively. This is how Raymond describes the new development model:
The history of UNIX should have prepared us for what we're
learning from Linux (and what I've verified experimentally on a
smaller scale by deliberately copying Linus's methods). That is, that while
coding remains an essentially solitary activity, the really great hacks
come from harnessing the attention and brainpower of entire communities.
The developer who uses only his or her own brain in a closed project is
going to fall behind the developer who knows how to create an open, evolutionary
context in which feedback exploring design space, code contributions, bug-spotting,
and other improvements come back from hundreds (perhaps thousands) of people.
But the traditional UNIX world was prevented from pushing this approach
to the ultimate by several factors. One was the legal constraints of various
licenses, trade secrets, and commercial interests. Another (in hindsight)
was that the Internet wasn't yet good enough.[13]
The fact that Raymond must study these communities in action to distill
the mostly unconscious norms that shape their dynamics; that the contribution
of important factors (such as the Internet) can be discerned only in retrospect;
and that Torvalds himself seems largely unconcerned with intentional planning,
seems to confirm that much of the dynamics of the Linux project
were unintended consequences. However, what are unintended consequences
for a project at a particular point in its history may become intentionally
built-in features of another project, or of another phase of the original
project, once we analyze the history of the process with the benefit of
hindsight. This is, in fact, an important subject in the literature of Transaction
Cost economics, which I will describe below.[14]
In Raymond's case, the lessons to be learned from unintended effects gravitate
around the two related issues of how the legitimacy of a project leader
is established and maintained, and how this legitimacy prevents a project
from losing its identity by diverging (or forking) into a multiplicity of
subprojects, each with its own leader. The dangers of forking cannot be
overstated. Indeed, one of Microsoft's main scare tactics to
lure developers and users away from open-source projects is to use the threat
of forking: while Windows is developed in a process where decision-making
is strongly centralized, and hence may have a clear sense of purpose and the
long-term planning which guarantees it will keep its identity, an operating
system developed through decentralized decision-making cannot guarantee
developers of application programs that their investment in time and resources
will pay off in the long run.[15]
Let me tackle these two issues one at a time. The question of
the legitimacy of project leaders (and hence of the projects themselves)
has been analyzed by Raymond, though his analysis is muddled by talk of
"ownership of a project", an expression which confuses questions
of property rights with those of legitimization of a certain process of
decision making by the leader (or maintainer) of a project. Raymond distinguishes
three separate ways in which a project may become legitimate:
There are, in general, three ways to acquire ownership
of an open-source project. One, the most obvious, is to found the project.
When a project has had only one maintainer since its inception and the maintainer
is still active, custom does not even permit a question as to who owns the
project... The second way is to have ownership of the project handed to
you by the previous owner (this is sometimes known as 'passing the baton').
It is significant that in the case of major projects, such transfers of
control are generally announced with great fanfare. While it is unheard
of for the open-source community at large to actually interfere in the owner's
choice of succession, customary practice clearly incorporates a premise
that public legitimacy is important. The third way of acquiring ownership
of a project is to observe that it needs work and the owner has disappeared
or lost interest. If you want to do this, it is your responsibility to make
the effort to find the owner. If you don't succeed, then you may announce
in a relevant place (such as a Usenet group dedicated to the application
area) that the project appears to be orphaned, and that you are considering
taking responsibility for it.[16]
Raymond compares these three ways of legitimizing leadership to the three
ways in which land tenure may become legitimate in the tradition of Anglo-American
common law, as systematized by John Locke (homesteading and laboring a piece
of previously unowned land; title transfers; and the claiming of abandoned
land). Although I am far from knowledgeable in the history of law, Locke's
ideas seem to me to bear more on the question of the legitimacy, not the
nature, of private ownership. But at any rate, it is clear that open-source
leaders can only be said to own their projects in a metaphorical sense,
and that the real issue is the source of legitimacy for their decisions.
Now, two challenges to that legitimacy may be mounted: the
first is to fork the project, that is, to install a new leader who will
now direct or maintain an alternative version of the program under development;
the second is to add pieces of code which have not been approved by the
project leader (these are known as "rogue patches"). Although
forking is not necessarily a bad thing (the BSD project has forked
at least three times, but each variant has micro-specialized on a particular
aspect, such as security or portability), I already suggested that it does
have consequences for the developers of applications, who need to be reassured
that an operating system will become a stable standard. Rogue patching,
on the other hand, directly affects the central pragmatic goal of the open-source
movement, the creation of programs that are robust to crashes, because without
careful addition of patches tested and approved by a leader, there is no
guarantee that a new piece of code will not introduce bugs, and that these
will go unnoticed by other community members.
Raymond analyses the community norms which prevent forking
and rogue patching from becoming widespread phenomena (and hence from endangering
the integrity of the community) using the other component of the problem of
intellectual property: incentive. While monetary incentives to produce do
not seem to be a problem for self-motivated hackers, incentives not to destroy
the legitimacy of a project are needed. These may be understood, he argues,
if we picture these communities as involved in an economy of reputation.
That is, as if what is being "exchanged" in these communities
was not monetary values but less-tangible values such as peer-recognition
of one's skills and contributions. In his words:
Forking projects is bad because it exposes pre-fork contributors
to a reputation risk they can only control by being active in both child
projects simultaneously after the fork....Distributing rogue patches (or
worse, rogue binaries) exposes owners to an unfair reputation risk. Even
if the official code is perfect, the owners will catch flak from bugs in
the patches. Surreptitiously filing someone's name off a project is, in
cultural context, one of the ultimate crimes. [Given that it directly attacks
a source of reputation, one's place of merit in a production history]....All
three of these taboo behaviors inflict harm on the open-source community
as well as local harm on the victim(s). Implicitly they damage the entire
community by decreasing each potential contributor's perceived likelihood
that [his or her contributions] will be rewarded.[17]
3. Transaction Costs
Let me now attempt to link some of these ideas to the concepts
developed within the New Institutionalist approach to economics. This is
hardly an original undertaking, as many commentators on the open-source movement
use the concept of "transaction costs" to describe, for example,
the role which the Internet has played as a reducer of "coordination
costs". But rather than merely trying to specify the nature of the
transaction costs involved I would like to discuss two concepts from this
branch of economics which bear directly on the two aspects of the movement
I just discussed: the GPL as an enforcement mechanism for community norms,
an aspect that links it to the concept of "institutional environment",
and the leadership system of open-source projects, an aspect which relates
to the concept of "governance structures". The origin of transaction
cost economics may be traced to the work of Ronald Coase who, among other
things, emphasized the role of legal considerations in economics. When people
exchange goods in a market, not only do physical resources change hands but
also rights of ownership, that is, the rights to use a given resource and
to enjoy the benefits that may be derived from it. These legal background
conditions were then "expanded beyond property rights to include contract
laws, norms, customs, conventions, and the like..."[18] Collectively,
these political, social and legal ground rules forming the basis not only
for exchange, but also for production and distribution, are referred to
as the "institutional environment" of an economy.
The coiner of the term, economist and economic historian Douglass North,
believes that when market exchanges are conceived this way they can be shown
to involve a host of "hidden" costs ranging from the energy and
skill needed to ascertain the quality of a product, to the drawing of sales
and employment contracts, to the enforcement of those contracts. In medieval
markets, he argues, these transaction costs were minimal, and so were their
enforcement characteristics: threats of mutual retaliation, ostracism, codes
of conduct and other informal constraints sufficed to allow for a more or
less smooth functioning of a market. But as the volume and scale of trade
intensified (or as its character changed, as in the case of foreign, long-distance
trade) new institutional norms and organizations were needed to regulate
the flow of resources, ranging from standardized weights and measures, to
the use of notarial records as evidence in merchant law courts. North's
main point is that as medieval markets grew and complexified their transaction
costs increased accordingly, and hence that without a set of institutional
norms and organizations to keep these costs down the intensification
of trade in the West would have come to a halt. Economies of scale in trade
and low-cost enforceability of contracts were, according to North, mutually
stimulating
19.
The other (and better known) New Institutionalist contribution is the idea
that depending on the balance of different transaction costs different governance
structures become appropriate (in terms of their relative efficiency) in
an economy. As early as 1937, Ronald Coase convincingly argued that the
traditional picture of a market, as a system in which, without central control,
individual traders are collectively coordinated by the price mechanism,
is only valid for a certain combination of transaction costs. For other
combinations, firms (that is, more or less hierarchical institutional organizations)
are a more efficient mechanism of coordination. As he puts it, "the distinguishing
mark of the firm is the supersession of the price mechanism."20
Coase goes on to argue that:
The main reason why it is profitable to establish a firm
would seem to be that there is a cost of using the price mechanism. The
most obvious cost of 'organizing' production through the price mechanism
is that of discovering what the relevant prices are....The costs of negotiating
and concluding a separate contract for each exchange transaction which takes
place on a market must also be taken into account....It is true that contracts
are not eliminated when there is a firm, but they are greatly reduced. A
factor of production (or the owner thereof) does not have to make a series
of contracts with the factors with whom he is co-operating within the firm,
as would be necessary, of course, if this co-operation were a direct result
of the working of the price mechanism. For this series of contracts is
substituted one."21
Firms are not, of course, all the same. In particular, they may differ in
size and in the degree to which they possess market power. Coase also dreamt
of giving the question of differences in size a more scientific treatment,
arguing that the more transactions are conducted without the price mechanism,
the larger a firm should get, up to the point where "the costs of organizing
an extra transaction within the firm are equal to the costs involved in
carrying out the transaction in the open market or the costs of organizing
by another entrepreneur."22
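Coase's marginal condition can be put in toy numerical form. The sketch below is purely illustrative and not from the essay: the rising internal cost curve and the flat market cost are invented assumptions, chosen only to show the firm expanding until the cost of organizing one more transaction in-house meets the cost of going to the market.

```python
# Illustrative sketch of Coase's firm-size condition. A firm keeps
# internalizing transactions while the marginal cost of organizing one
# more transaction in-house stays below the per-transaction cost of
# using the price mechanism. Both cost functions are invented.

def internal_marginal_cost(n: int) -> float:
    """Assumed cost of organizing the n-th transaction inside the firm:
    rises as coordination overhead accumulates."""
    return 1.0 + 0.1 * n

MARKET_COST = 3.0  # assumed flat cost of a transaction on the open market

def optimal_firm_size(max_n: int = 100) -> int:
    """Expand the firm while internalizing is cheaper than the market."""
    n = 0
    while n < max_n and internal_marginal_cost(n + 1) < MARKET_COST:
        n += 1
    return n

print(optimal_firm_size())  # the firm stops growing where the two costs meet
```

Under these made-up numbers the firm internalizes nineteen transactions and then stops; changing either cost function moves the boundary, which is exactly Coase's point about firm size.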
Since Coase first proposed these theses much work has been done in discovering
transaction costs other than those he explicitly dealt with (information-gathering
costs, contracting costs) and the list seems to still be growing. What this
means is that, unlike the simple dichotomy of governance structures (markets
coordinated by prices versus firms coordinated by commands) which results
from including only a few transaction costs, the inclusion of a wider variety
of costs leads one to consider a host of hybrid structures between pure markets
and pure hierarchies.23 (This is, in a sense, implicit in Coase, given that
he does distinguish between an economy run by hundreds of small firms, and
one run by a handful of oligopolistic large corporations.)
In these terms, the contributions of the open-source movement which go beyond
the production of software are: the creation of the GPL contract as a key
component of the movement's institutional background; and the creation of
a unique, hybrid governance structure, exemplified by the development model
behind Linux. In both cases we are faced with experimental creations,
that is, a license agreement designed to propagate community norms with
very low enforceability costs, and a hybrid of centralized and decentralized
decision-making elements with very low coordination costs. I call them both
"experimental" not only because of their relative novelty, but
also because the savings in transaction costs that they effect have not
been fully tested in reality. I will return to this point in my conclusion,
but before that I need to clarify the definition of markets as governance
structures, given that confusion on this matter seems to prevail within
the hacker community, or at least, in the published thoughts of its philosophers.
Eric Raymond, for example, hesitates between characterizing the Linux
development model as a "bazaar" (which implies, of course, that
he views it as having the decentralized structure of a market) or as a "gift
culture". Although he does not explicitly acknowledge it, he is basically
attempting to use the well-known classification of Karl Polanyi, who divided
the different forms of social integration into the categories
of "exchange" (that is, markets), "redistribution" (governmental
hierarchies) and "gift" (reciprocity).24
This static classification has been severely criticized by contemporary
economic historians who have realized that any such list of essentialist
categories cannot do justice to the complexity and heterogeneity of economic
reality and history.25 In Raymond's hands the essentialism
becomes even worse since he argues that these three forms are hardwired
in the human brain26, a strange claim by someone
who clearly realizes that structures with non-reducible properties may emerge
from interactions guided by local rules. As an antidote to this, let me
quote Herbert Simon on the different conceptions of the market operating
in today's economic thought:
In the literature of modern economics....there is not one
market mechanism; there are two. The ideal market mechanism of general equilibrium
theory is a dazzling piece of machinery that combines the optimizing choices
of a host of substantively rational economic agents into a collective decision
that is Pareto optimal for the society. [That is, results in an allocation
of scarce resources which may not be modified without making someone worse
off.] The pragmatic mechanism described by von Hayek is a much more modest
(and believable) piece of equipment that strives for a measure of procedural
rationality by tailoring decision-making tasks to computational capabilities
and localized information. It makes no promises of optimization."27
The conception of the market prevalent in analyses of the open-source movement
is basically the neo-classical version of Adam Smith's invisible hand (general
equilibrium theory), where economic agents have optimizing rationality and
perfect information about prices. This is clear in Raymond's use of expressions
like "maximizing reputational returns". Simon, on the other hand,
persuasively argues that human beings cannot reach optimal decisions, their
bounded rationality (their limited computational resources) allowing them
at most to reach satisfactory compromises. If decentralized markets are
better than centralized hierarchies it is "because they avoid placing
on a central planning mechanism a burden of calculation that such a mechanism,
however well buttressed by the largest computers, cannot sustain. [Markets]
conserve information and calculation by making it possible to assign decisions
to the actors who are most likely to possess the information (most of it
local in origin) that is relevant to those decisions."28
Needless to say, the conception of markets used in transaction cost economics
is the von Hayek/Simon one, as is clear from the fact that the first
transaction cost mentioned by Coase is the cost of finding information
about prices. But in addition to limited rationality, limited honesty (or
the costs of opportunism) is also factored in.29
Now, when I claim that the governance structure behind the Linux
project is a hybrid of market and hierarchy, it is the "informational"
definition of markets that I have in mind. Clearly, in the Linux
project prices do not play the role of transmitters of information (since
no one gets compensated monetarily) but the definition of a "market-like
structure" may be broadened to include other means of transmitting
information. The key is the decentralized use of local information. Analyses
of the dynamics of the project based on interviews with participants seem
to confirm this point. There is, on one hand, a hierarchical component composed
of Linus Torvalds himself, and a group with a changing composition (including
Alan Cox, Maddog Hall and 6 to 12 others) of his closest associates. This
core group, however, is not formally defined and has no real power to compel
obedience from those outside of it. The members of the core group do play
a key informational job mediating between "Torvalds and the development
community, providing an effective filter to reduce the [informational] load
reaching Torvalds - effective to the very extent that, while Torvalds still
insists that he reviews
every line of code he applies to the kernel, some people think that this
is unnecessary... suggesting the general reliability of the decentralized
development below Torvalds."30
On the other hand, the power of the hundreds of people who do not belong
to this core group lies precisely in the local information that they can
bring to bear, information which can only be gathered by users of a program
who know what is relevant to them. Like Simon's markets, these users are
a "parallel computer", a vast geographically dispersed army of
programmers working simultaneously (in parallel) finding bugs and, as Raymond
puts it, collectively exploring the space of possible program designs.31
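The "parallel computer" image can be made concrete with a toy simulation, which is entirely my own illustration and not from the essay. The numbers (input space, bug count, workload sizes) are invented; the point is Simon's: users testing the inputs they actually run find every bug that affects anyone, while a central tester spending the same effort on uniformly sampled inputs finds only a fraction.

```python
import random

# Toy model (invented numbers): a program misbehaves on certain inputs,
# and each user actually runs only a small personal workload of inputs.
random.seed(0)
SPACE = 10_000
buggy = set(random.sample(range(SPACE), 500))  # inputs that trigger a bug

# 100 users, each with a workload of 10 inputs they genuinely use.
workloads = [random.sample(range(SPACE), 10) for _ in range(100)]

# The bugs that matter are exactly those hit by some user's workload.
relevant = {i for w in workloads for i in w if i in buggy}

# Users exercising their own workloads find every bug that affects anyone.
found_by_users = {i for w in workloads for i in w if i in buggy}

# A central tester with the same budget (1,000 tests) sampling uniformly
# at random finds only some of the bugs that actually matter.
central_sample = random.sample(range(SPACE), 1_000)
found_centrally = {i for i in central_sample if i in relevant}

print(len(relevant), len(found_by_users), len(found_centrally))
```

The users' advantage here is not raw testing effort but the local information each one brings: they test precisely the inputs whose failures are relevant to them, which is Simon's argument for assigning decisions to the actors who hold the local information.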
The possibility of tapping into this reservoir of resources without the
aid of prices to convey information is usually explained by the existence
of the Internet. Torvalds is given the credit for having been the first
to exploit this latent capability but I think it is fair to say that he
stumbled upon, rather than planned, this possibility. Hence my characterization
of the emergent governance structure as an unintended consequence of intentional
action.
To conclude this brief examination of the open-source movement I would like
to emphasize how experimental its non-software contributions are. The GPL
has not, to my knowledge, been tested in court; it is thus a piece of legal
machinery which has demonstrated its power in practice but which may one
day be challenged, revealing that it did not reduce enforceability costs after
all. An important task for legal experts today is, I believe, to create
imaginary scenarios where this challenge could be mounted and to invent
new license designs which could avoid negative outcomes.32
The development model, on the other hand, has proved itself worthy of certain
production tasks (such as rapidly evolving a piece of pre-existing software)
but it has yet to show that it can fulfill all the different needs of software
production (including the initiation of brand new types of software). But
even if the movement failed when confronted with any of these challenges,
it would have already proved its worth by showing the potential gains of
creatively experimenting with alternative institutional environments and
governance structures. Even non-programmers have a lesson to learn from
this daring institutional experimentation.
© Manuel DeLanda 2001