Distributed collaboration has become a major topic in the economic, organisational, and governance literatures that study social organisations. New information technologies, and new forms of their usage, have given rise to previously unknown distributed forms of collaboration.
Distributed collaboration describes close collaboration among geographically and organisationally distributed persons or teams facilitated by modern information and communication technology.
Open source software development and community-based content production are among the most prominent domains of application of this collaborative model.
Distributed collaboration can include classic inter-firm collaboration as well as user-firm networks or open source-like networks of voluntary producers. Distributed collaboration among producers is supplemented by enhanced cooperation among distributed commanders.
“Distributed decision-making” or “collaborative command and control”, applied, e.g., in military and emergency management, reflects the distributed collaboration among leaders of different, distributed, and even independent teams.
Examples of the combination of open source (as a software attribute) with distributed inter-firm collaboration are so-called “open source service networks”.
The term describes international networks “of firms that collaborate in order to service customer software needs based on open source solutions”.
These networks rely on a number of governance techniques and social mechanisms to coordinate and safeguard the exchange of information: restriction of access to the network, a “macroculture” of shared assumptions and values, and collective sanctions against violations of shared norms.
These open source service networks differ from open source projects as their goal is not to produce open source software and they hence do not apply an open-source mode of production.
Unlike conventional inter-firm collaborations, they use social mechanisms like reputation instead of legal means such as contracts. Another broad concept for geographically distributed collaboration has been suggested by a joint research project of the Oxford Internet Institute and McKinsey.
In so-called “distributed collaboration problem solving networks” or “collaborative network organisations,” “peers” and “networked individuals” collaborate on “problem solving and co-creation of services and products beyond traditional organizational boundaries and geographical constraints.”
In his “classification framework”, Dutton distinguishes between three types of collaborative network organisations: “1.0 Sharing”, “2.0 Contributing”, and “3.0 Co-Creating”. Each of these types also “links” with four different “management strategies to collaboration”, namely architecture, openness, control, and modularisation.
As an example, Dutton asserts that the need for access control in co-creation networks like Mozilla’s Firefox is greater than in sharing networks such as Bugzilla.
While this stream of publications has made extensive use of the concepts used in this study, it has added little conceptual clarity and theoretical value to the pre-existing literature described in the subsequent subsections.
A third way to frame distributed collaboration has been brought forward by Adler and Heckscher. Similar to the conceptualisations mentioned above, Adler and Heckscher’s “collaborative communities“ aim to encompass broader social realities than peer production projects and their specific definitional requirements.
In some societal areas, the authors argue, communities have become “the dominant organizing principle”, superseding markets and firms. This contrasts with “older” forms of community, which existed either in the shadow of markets or in the shadow of hierarchies.
A collaborative community is based on values (“participants coordinate their activity through their commitment to common, ultimate goals”) and on organisation (“social structures that support interdependent process management through formal and informal social …”).
Their observations on the characteristics of collaborative communities might help explain some of the potential empirical results of this study.
“Neither the traditional nor modern forms of community are adequate for groups that seek high levels of adaptiveness and complex interdependence. In such situations trust is particularly important, because people depend a great deal on others whose skills and expertise they cannot check….
Collaborative community forms when people work together to create shared value. This increasingly characterizes societies in which the generation of knowledge, often involving many specialists, has become central to economic production”.
Firms, too, are affected by the challenges that the creation of knowledge poses for businesses.
“Corporations are finding that to produce the complex forms of knowledge increasingly needed for economic growth — bringing together the expertise of multiple specialists — they need to move beyond the informal links of the paternalist community”.
This need has apparently led to corporate support for distributed, peer-governed production networks. The question arises, however, as to how these networks are or could be organised in the security domain.
Adler and Heckscher explain why distributed forms of collaboration work even though the actors within them may never have met in person and there is no hierarchical intermediary to guarantee and enforce mutually expected behaviour.
The glue holding individuals together in “collaborative communities” is “swift trust”, a mode of trust that is not based on acquaintance but on assumed common values and goals.
But is “swift trust” sufficient when it comes to security issues? With regard to collaboratively produced Internet security, swift trust might indeed be inadequate for sharing secret information and for globally distributed collaborative communities beyond the scope of intra-firm departments or collaboration amongst a few well-acquainted corporations.
This raises the question of how trust is generated in distributed, heterogeneous global security networks, and whether secrecy is an indispensable attribute of security production. The first question will be addressed later in this thesis, in the section on trust in the security community; the latter in the section on secrecy vs. openness for collaboration.
Free and open source software’s status as a phenomenon worth studying is due not only to the sheer output and results of projects, which are at least partly conducted by volunteers dedicating their spare time to create public goods, but also to the new forms of organisation and governance techniques it has created for distributed, web-based collaborative endeavours.
The resulting body of literature should thus pose a fruitful source of ideas and models to analyse distributed, collaborative production in the domain of Internet security.
Many aspects of FOSS relevant to the social sciences have been studied. Riehle analysed how open source software has changed the behaviour of actors in the software market. Cooper described the sources of economic advantage: how savings in digital production processes can be made on supply-side resources and transaction costs, and how demand-side values can be enhanced, by applying open-source software and production modes, peer-to-peer, and mesh technologies.
Schweik, English, and Haire provide insight into the relevant factors that make open source collaboration projects successful, and how this form of collaboration can be successfully applied to non-software activities.
The quantitative dimension of open source software has been analysed by Deshpande and Riehle. Maxwell discusses the role of openness in open source software and of its related property regime for the open-source innovation model.
Riehle considers the use of the open-source model as an opportunity for firms to lower production costs and product prices, which would increase the overall market size for their products.
The consequences of the openness of source code are discussed by Hoepman and Jacobs, who conclude that in the long run the openness of systems makes them more secure, while in the short run exposure would likely increase.
Given the challenges posed by Internet security risks and the technical and organisational innovations necessary to overcome them, the relationship between innovation and the open source method is of high interest.
Open source software as a showcase for “open innovation” was analysed by West and Gallagher. This mode of innovation “systematically encourag[es] and explor[es] a wide range of internal and external sources for innovation opportunities, consciously integrating that exploration with firm capabilities” and transforms the management of intellectual property by firms.
The open innovation model, as it manifests in open source software development, has two key characteristics, namely collaborative development and shared rights to use that technology. (For IBM’s motives to engage with open source software, cf. Samuelson 2006.)
Economic incentives for firms to participate in OSS development are to:
a) use open source projects as a means to pool their R&D resources
b) use existing open source projects as a foundation for commercial products and services.
In a brief journal article, Mayer-Schönberger takes a different stance regarding the innovative potential of the open source method.
He argues that disruptive technological innovation is likely to be hindered when a network or community is characterised by many-to-many connections. Dense networks and high connectedness would create “groupthink” and lead to incremental, small-step innovation instead of what would be necessary to overcome spam and other challenges.
“To enable innovations, especially non-incremental, discontinuous, and radical ones — which are needed, among other things, to launch successfully the next-generation Internet — may require unique policy intervention: reducing the social ties that link its coders.” (2009) In contrast, Wang (2007) observed that high connectedness among its coders and contributors increases the likelihood that an open source project is successful.
Among the core issues of research on FOSS are questions about the nature and defining characteristics of the open source method, the sources for its success, and factors of its viability and sustainability.
Osterloh (2004) identified intrinsic and extrinsic motivation and favourable governance mechanisms, which would not hinder the former, as prime factors for the success of open source projects.
A set of motivational, situational, and institutional factors are prerequisites for the functionality of “virtual communities of practice”.
These communities are characterised by an absence of central authorities and of privatised intellectual property rights, by loosely defined group borders, and by unclear resources (Osterloh, Rota, & Kuster, 2006). As to the motivational factor, Osterloh argues that actors need to be motivated by a mix of intrinsic and extrinsic factors.
As to the situational factor, open source production is more likely to be successful when it is less efficient for independent suppliers to ensure private ownership of intellectual property. As to the organisational factors, volunteering, self-governance, participation and transparency of decision-making are supportive for open source systems.
Finally, with regard to the institutional factors, license arrangements like copyleft and the support of open source community norms by commercial providers foster a successful application of modes of open source production.
David and Shapiro found that contributors to open source projects have a wide range of motivations.
The degree of success of open source projects depends on their ability to raise and sustain motivations for actors to start and continue contributing to them (David & Shapiro, 2008). The contributions of IBM, HP, and Sun to the Mozilla project served as prime examples.
The factors behind the success of open source have, however, been most thoroughly studied by Weber. The remainder of this section is devoted to those factors identified by Weber that make modes of open source production successful.
Open source as “a way of organizing production” (Weber) — as opposed to open source as an attribute of software — is defined by some key characteristics:
Everyone can participate and contribute to these projects, projects are set up and run on a voluntary basis, contributors share what is called the ‘open-source ideology’ and projects are organised around a certain property regime.
Valuable for the analysis of power-saturated security governance settings, Weber has discussed the role and locus of power in different kinds of networks. Differentiating between three network types (open source, platform, subcontracting), Weber sees power in an open-source network residing with those inventing and dominating the ideas and values of the projects.
The ordering principle would be a voluntary, charismatic meritocracy, presumably with Linus Torvalds as the incarnation of that archetype. Apparently, actual implementations of the open source ideal do come with some degrees of internal organizational hierarchy and do not resemble anarchic or power-free venues.
Weber argues that the open source mode of production is a specialisation of “distributed innovation”. This form of innovation is based on four principles: the empowerment to experiment, mechanisms to identify relevant information, mechanisms to recombine information, and a governance system supporting such an innovation approach.
The distributed collaboration of the open source model empowers potential contributors to play with the given, freely accessible resources of an open source project, and to recombine ideas and previous results to come up with new, innovative ideas.
The absence of central decision-making “in the sense that no one is telling anyone what to do or what not to do … is the essence of distributed innovation”. The innovative aspect of open source production hence relies on an appropriate structuring of information. Most relevant for this study are Weber’s contributions to a model of the viability of open source production.
Weber has not developed an empirically tested theory of open source production, but he has formulated a convincing set of assumptions about the factors influencing the feasibility of open source adoption in domains other than software development.
He stresses that much of his thinking about the effectiveness and viability of the open source process in general consists of “expansive hypotheses”. But “they are broadly indicative of the kinds of conditions that matter”.
Based on and extending his empirical analysis, Weber has distilled a list of factors that likely allow “the open source process … to work effectively”. These factors are crucial for the viability of the open source production process in a particular domain of knowledge and for a particular good.
The factors can be divided into attributes of tasks and production processes, and attributes of agents involved. Regarding the tasks and the general production process, effectiveness is increased by the following six factors:
- First, the knowledge necessary for the production process is “not proprietary or locked-up” and hence allows for “disaggregated contributions”.
- Second, the problem to be addressed is of a certain degree of general importance, e.g., a certain threshold in the number of beneficiaries is exceeded.
- Third, quality of production is ensured by mechanisms such as “widespread peer attention and review”, which manages to bring the number of errors introduced below the number of errors corrected.
- Fourth, “strong positive network effects”, partly supported by the aforementioned criteria, increase the effectiveness of open source as process.
- Fifth, given the voluntariness, open source processes tend to be more effective if they can be initiated by “an individual or a small group [that] can take the lead”.
- Last but not least, the production process is framed and supported by a “voluntary community”.
Regarding the agents involved in open source production and the effect of their attributes on its effectiveness, actors need to be able to judge the viability of an open source project.
By contributing to an open source project, agents make a bet that joint efforts will result in a “joint good”.
In addition, agents will expect payoffs for their contributions, such as “gain[ing] personally valuable knowledge” of positive normative and ethical valence.
Not only individuals but also organisations can decide to participate in distributed collaboration endeavours.
The feasibility of distributed innovation is, according to Weber, increased by three further factors: the problem resides within an organisation; more actors are likely to face the same or a similar problem; and the solution to the problem does not represent a competitive advantage over competing actors.
Weber builds his arguments partly on transaction costs economics, but he stresses that factors other than efficiency are of importance, too.
He doubts that transaction costs are the main driver for organisations which are considering the option of open sourcing some of their activities. “Because no one can measure these activities in advance (and often not even after the fact), the decision becomes an experiment in organisational innovation.“
An organisation’s decision to open source certain activities would not be sustainable if it did not result in superior innovations, higher efficiency, and lower transaction costs in the long run. Weber correctly identifies “two factors that transaction cost theory does not emphasize”: “the relationship between problem-solving innovation and the location of tacit information” and “the importance of principles and values, in contrast to efficiency”.
Therefore, an organisation’s or, more neutrally, an actor’s decision to go open source is not merely an “efficiency choice around distributed innovation”.
Much depends on the structuring and organization of knowledge in that domain.
The distribution of tacit knowledge across several independent actors, the necessity of such knowledge for problem-solving, the culture of both sharing knowledge and relying on that shared knowledge are different in some societal domains such as medicine than in others, like software.
Sharing and using knowledge reflects shared principles and values among collaborators, at least regarding how knowledge should be distributed in a particular domain or community.
Weber discusses the idea of distributed problem solving among physicians and genomic researchers. He highlights how a barely developed sharing culture in the communities of both physicians and genomics scientists has stymied distributed collaboration in these two branches of the medical domain.
Another stumbling block for the open source method in genomics is governmental regulation.
This is despite the “structure of medical knowledge”, which is widely distributed among countless experts. A superficial glance at traditional security governance practices suffices to hypothesize that the domain of security is packed with regulatory and cultural hurdles for the open source method.
On the other hand, the structure of security-relevant knowledge is widely distributed, and the “key issue [of] organization of knowledge sharing” has been solved in favour of the open source model in the information technology domain.
To sum up, a wide range of factors act as determinants of an institutional design. It is not clear whether the list of these factors is sufficient, nor what the overall relevance of each factor is for the viability of open source production.
Neither is it apparent which institutional design will evolve, or be chosen by actors, when a few of the aforementioned factors are detrimental to the viability of the open source method in a particular context while the majority of them support it.
It is not clear which factors are nice-to-have and which are sine qua non. Until this is settled, the voluntary application of the open source method as an organising principle in domains other than software amounts to “experimenting” and “requires learning by doing”. Weber’s decade-old question is still valid: “What happens at the interface, between networks and hierarchies, where they meet?”