Bradley M. Kuhn's Blog



  • 2015-11-26: Do You Like What I Do For a Living?

    [ A version of this blog post was crossposted on Conservancy's blog. ]

    I'm quite delighted with my career choice. As an undergraduate and even in graduate school, I still expected my career to extend my earlier careers in the software industry: a mixture of software developer and sysadmin. I'd probably be a DevOps person now, had I stuck with that career path.

    Instead, I picked the charity route: which (not financially, but work-satisfaction-wise) is like winning a lottery. There are very few charities related to software freedom, and frankly, if (like me) you believe in universal software freedom and reject proprietary software entirely, there are two charities for you: the Free Software Foundation, where I used to work, and Software Freedom Conservancy, where I work now.

    But software freedom is not merely an ideology for me. I believe the ideology matters because I see that the lives of developers and users are better when they have software freedom. I first got a taste of this IRL when I attended the earliest Perl conferences in the late 1990s. My friend James and I stayed in dive motels and even slept in a rental car one night to be able to attend. There was excitement in the Perl community (my first Free Software community). I was exhilarated to meet in person the people I'd seen only as god-like hackers posting on perl5-porters. James was so excited he asked me to take a picture of him jumping as high as he could with his fist in the air in front of the main conference banner. At the time, I complained; I was mortified and felt like a tourist taking that picture. But looking back, I remember that James and I felt that same excitement and were just expressing it differently.

    I channeled that thrill into finding a way that my day job would focus on software freedom. As an activist since my teenage years, I concentrated specifically on how I could preserve, protect and promote this valuable culture and ideology in a manner that would assure the rights of developers and users to improve and share the software they write and use.

    I've enjoyed the work; I attend more great conferences than I ever imagined I would, where now people occasionally walk up to me with the same kind of fanboy reverence that I reserved for Larry Wall, RMS and the heroes of my Free Software generation. I like my work. I've been careful, however, to avoid a sense of entitlement. Since I read it in 1991, I have never forgotten RMS' point in the GNU Manifesto: “Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else.” It's a point he continues in his regular speeches, by adding: “I [could] just … give up those principles and start … writing proprietary software. I looked for another alternative, and there was an obvious one. I could leave the software field and do something else. Now I had no other special noteworthy skills, but I'm sure I could have become a waiter. Not at a fancy restaurant; they wouldn't hire me; but I could be a waiter somewhere. And many programmers, they say to me, ‘the people who hire programmers demand [that I write proprietary software] and if I don't do [it], I'll starve’. It's literally the word they use. Well, as a waiter, you're not going to starve.”

    RMS' point is not merely to expose the false dilemma inherent in “I have to program, even if it's proprietary, because that's what companies pay me to do”, but also to expose the sense of entitlement in assuming a fundamental right to do the work you want. This applies not just to software authorship (the work I originally trained for) but also to the political activism and non-profit organizational work that I do now.

    I've spent most of my career at charities because I believe deeply that I should take actions that advance the public good, and because I have a strategic vision for the best methods to advance software freedom. My strategic goals to advance software freedom include two basic tenets: (a) provide structure for Free Software projects in a charitable home (so that developers can focus on writing software, not administration, and so that the projects aren't unduly influenced by for-profit corporations) and (b) uphold and defend Free Software licensing, such as copyleft, to ensure software freedom.

    I don't, however, arrogantly believe that these two priorities are inherently right. Strategic plans work toward a larger goal, and pursuing success of a larger ideological mission requires open-mindedness regarding strategies. Nevertheless, any strategy, once decided, requires zealous pursuit. It's with this mindset that I teamed up with my colleague, Karen Sandler, to form Software Freedom Conservancy.

    Conservancy, like most tiny charities, survives on the determination of its small management staff. Karen Sandler, Conservancy's Executive Director, and I have a unique professional collaboration. She and I share a commitment to promoting and defending moral principles in the context of software freedom, along with an unrelenting work ethic to match. I believe fundamentally that she and I have the skills, ability, and commitment to meet these two key strategic goals for software freedom.

    Yet, I don't think we're entitled to do this work. And herein lies another great feature of a charity: a charity not only serves the public good; the US IRS also requires that a charity be funded primarily by donations from the public.

    I like this feature for various reasons. Particularly, in the context of the fundraiser that Conservancy announced this week, I think about it in terms of seeking a mandate from the public. As Conservancy is poised to begin its tenth year, Karen and I as its leaders stand at a crossroads. For reasons of the organization's budget, we've been thrust into testing this question: Does the public of Free Software users and developers actually want the work that we do?

    While I'm nervous that perhaps the answer is no, I'm nevertheless not afraid to ask the question. So, we've asked. We asked all of you to show us that you want our work to continue. We set two levels, matching the two strategic goals I mentioned. (The second is harder and more expensive to do than the first, so we've asked many more of you to support us if you want it.)

    It's become difficult in recent years to launch a non-profit fundraiser (fundraisers have existed for generations) and not think of the relatively recent advent of gofundme, Kickstarter, and the like. These new systems provide a (sadly, usually proprietary software) platform for people to ask the public: Is my business idea and/or personal goal worth your money? While I'm dubious about those sites, I do believe in democracy enough to build my career on a structure that requires an election (of sorts). Karen and I don't need you to go to the polls and cast your ballot, but we do ask you to consider whether what we do for a living at Conservancy is worth US$10 per month to you. If it is, I hope you'll “cast a vote” for Conservancy and become a Conservancy supporter now.

    Posted on Thursday 26 November 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2015-09-28: How Would Software Freedom Have Helped With VW?

    [ A version of this blog post was crossposted on Conservancy's blog. ]

    Would software-related scandals, such as Volkswagen's use of proprietary software to lie to emissions inspectors, cease if software freedom were universal? Likely so, as I wrote last week. In a world where regulations mandate distribution of source code for all the software in all devices, and where no one ever cheats on that rule, VW would need means other than software to hide their treachery.

    Universal software freedom is my lifelong goal, but I realized years ago that I won't live to see it. I suspect that generations of software users will need to repeatedly rediscover and face the harms of proprietary software before a groundswell of support demands universal software freedom. In the meantime, our community has invented semi-permanent strategies, such as copyleft, to maximize software freedom for users in our current mixed proprietary and Free Software world.

    In the world we live in today, software freedom can impact the VW situation only if a few complex conditions are met. Let's consider the hypothetical series of events, in today's real world, that would have been necessary for Open Source and Free Software to have stopped VW immediately.

    First, VW would have created a combined or derivative work of software with a copylefted program. While many cars today contain Linux, which is copylefted, I am not aware of any cars that use Linux outside of the on-board entertainment and climate control systems. The VW software was not part of those systems, and VW engineers almost surely wrote the emissions testing mode code from scratch. Even if they included some non-copylefted Open Source or Free Software in it, those licenses don't require disclosure of any source code; VW's ability to conceal its bad actions with non-copylefted code is roughly identical to the situation of proprietary VW code before us. As a thought experiment, though, let's pretend that VW based the nefarious code on Linux by writing a proprietary Linux module to trick the emissions testing systems.

    In that case, VW would have violated the GPL. But that alone is far from enough to ensure anyone would catch VW. Indeed, GPL violations remain very prevalent, and only one organization enforces the GPL for Linux (full disclosure: that's Software Freedom Conservancy, where I work). That organization has such limited enforcement resources (only three people on staff, and enforcement is one of many of our programs) that I suspect years would pass before Conservancy had the resources to pursue the violation; Conservancy currently has hundreds of Linux GPL violations queued for action. Even once opened, most GPL violations take years to resolve. As an example, we are currently enforcing the GPL against one auto manufacturer who has Linux in their car. We've already spent hundreds of hours, and to date the company continues to fail in its GPL compliance efforts. Admittedly, it's highly unlikely that particular violator has a GPL-violating Linux module specifically designed to circumvent automotive regulations. However, after enforcing the GPL in that case for more than two years, I still don't have enough data about their use of Linux to even know which proprietary Linux modules are present — let alone whether those modules are nefarious in any way other than violating Linux's license.

    Thus, in today's world, a “software freedom solution” to prevent the VW scandal must meet unbelievable preconditions: (a) VW would have to base all its software on copylefted Open Source and Free Software, and (b) an organization with a mission to enforce copyleft for the public good would require the resources to find the majority of GPL violators and ensure compliance in a timely fashion. This thought experiment quickly shows how much more work remains to advance and defend software freedom. While requirements of source code disclosure, such as those in copyleft licenses, are necessary to assure the benefits of software freedom, they cannot operate unless someone exercises the offers for source code and looks at the details.

    We live in a world where most of the population accepts proprietary software as legitimate. Even major trade associations in the Open Source community, such as the OpenStack Foundation and the Linux Foundation, laud companies who make proprietary software, as long as they adopt and occasionally contribute to some Free Software too. Currently, it feels like software freedom is winning, because the overwhelming majority in the software industry believe Open Source and Free Software are useful and superior in some circumstances. Furthermore, while I appreciate the aspirational ideal of voluntary Open Source, I find in my work that many companies, just as VW did, will cheat against important social-good policies unless someone watches and regulates. Mere adoption of Open Source won't work alone; we yield the valuable results of software freedom only if software is copylefted and someone upholds that copyleft.

    Indeed, just as it has been since the 1980s, very few people believe that software freedom is of fundamental importance for all software users. Scandals, like VW's use of proprietary software to hide other bad acts, might slowly change opinions, but one scandal is rarely enough to permanently change public opinion. I therefore encourage those who support software freedom to take this incident as inspiration for a stronger stance, and to prepare yourselves for the long haul of software freedom advocacy.

    Posted on Monday 28 September 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2015-09-22: The EPA Deserves Software Freedom, Too

    The issue of software freedom is, not surprisingly, not mentioned in the mainstream coverage of Volkswagen's recent use of proprietary software to circumvent important regulations that exist for the public good. Given that Volkswagen is an upstream contributor to Linux, it's highly likely that Volkswagen vehicles have Linux in them.

    Thus, we have a wonderful example of how much we sacrifice at the altar of “Linux adoption”. While I'm glad for some Free Software to appear in products rather than none, I also believe that, too often, our community happily accepts the idea that we should gratefully laud a company that includes a bit of Free Software in its product and gives a little code back, even if most of what the company does is proprietary software.

    In this example, a company poisoned people and our environment with out-of-compliance greenhouse gas emissions, and hid their tracks behind proprietary software. IIUC, the EPA had to use an (almost literal) analog hole to catch these scoundrels.

    It's not that I'm going to argue that end users should modify the software that verifies emissions standards. But if end users could extract these binaries from the physical device, recompile the source, and verify the binaries match, someone would have discovered this problem immediately when the models drove off the lot.
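    The verification imagined in the paragraph above can be sketched in a few lines of shell. The file names and contents here are purely illustrative stand-ins (no real firmware is involved); the point is only the mechanism: hash the binary extracted from the device, hash the binary you rebuilt from the published source, and compare. A match requires a reproducible build, which is itself a non-trivial engineering discipline.

```shell
# Stand-ins for (a) the binary pulled off the physical device and
# (b) a binary compiled locally from the vendor's published source.
printf 'engine control firmware v1.0' > extracted.bin
printf 'engine control firmware v1.0' > rebuilt.bin

# With a reproducible build, the two checksums are identical.
a=$(sha256sum extracted.bin | cut -d' ' -f1)
b=$(sha256sum rebuilt.bin | cut -d' ' -f1)

if [ "$a" = "$b" ]; then
    echo "MATCH: device is running the code we audited"
else
    echo "MISMATCH: device firmware differs from published source"
fi

rm -f extracted.bin rebuilt.bin
```

Any third party able to run this comparison, not just a regulator with special access, could have raised the alarm.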

    So, why does no one demand this? To me, this feels like Diebold and voting machines all over again. So tell me, voters' rights advocates who claimed proprietary software was fine, as long as you could get voter-verified paper records: how are we going to “paper verify” our emissions testing?

    Software freedom is the only solution to problems that proprietary software creates. Sadly, opposition to software freedom is so strong, nearly everyone will desperately try every other (failing) solution first.

    Posted on Tuesday 22 September 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2015-09-15: Exercising Software Freedom in the Global Email System

    [ This post was cross-posted on Conservancy's blog. ]

    In this post, I discuss one example of how a choice for software freedom can cause many strange problems that others will dismiss. My goal here is to explain in gory detail how proprietary software biases in the computing world continue to grow, notwithstanding Open Source ballyhoo.

    Two decades ago, nearly every company, organization, entity, and tech-minded individual ran their own email server. Generally speaking, even back then, nearly all the software for both MTAs and MUAs was Free Software0. MTAs are the mail transport agents — the complex software that moves email around from one Internet domain to another. MUAs are the mail user agents, sometimes called mail clients — the local programs with which users manipulate their own email.

    I've run my own MTA since around 1993: initially with sendmail, then with exim for a while, and with Postfix since 1999 or so. Also, everywhere I've worked throughout my entire career since 1995, I've either been in charge of — or been the manager of the person in charge of — the MTA installation for the organization where I worked. In all cases, that MTA has always been Free Software, of course.

    However, the world of email has changed drastically during that period. The most notable change in the email world is the influx of massive amounts of spam, which has been used as an excuse to implement another disturbing change. Slowly but surely, email service — both the MTA and the MUA — have been outsourced for most organizations. Specifically, either (a) organizations run proprietary software on their own computers to deal with email and/or (b) people pay a third-party to run proprietary and/or trade-secret software on their behalf to handle the email services. Email, generally speaking, isn't handled by Free Software all that much anymore.

    This situation became acutely apparent to me earlier this month when Conservancy moved its email server. I had plenty of warning that the move was needed1, and I'd set up a test site on the new server. We sent and received some of our email for months (mostly mailing list traffic) using that server, configured with a different domain. When the shut-off day came, I moved Conservancy's email officially. All looked good: I had a current Debian, with a new version of Postfix and Dovecot on a speedier host, with better spam-protection settings in Postfix and better spam filtering with a newer version of SpamAssassin. All was going great, thanks to all those great Free Software projects — until the proprietary software vendors threw a spanner in our works.

    For reasons that we'll never determine for sure2, the IPv4 number that our new hosting provider gave us was already listed on many spam blacklists. I won't debate the validity of various blacklists here, but the fact is, for nearly every public-facing, pure-blacklist-only service, delisting is straightforward, takes about 24 hours, and requires at most answering some basic questions about your domain name and answering a captcha-like challenge. These services, even though some are quite dubious, are not the center of my complaint.

    The real peril comes from third-party email hosting companies. These companies have arbitrary, non-public blacklisting rules. More importantly, they are not merely blacklist maintainers, they are MTA (and in some cases, even MUA) providers who sell their proprietary and/or trade-secret hosted solutions as a package to customers. Years ago, the idea of giving up that much control of what happens to your own email would be considered unbelievable. Today, it's commonplace.

    And herein lies the fact that is obvious to most software freedom advocates but indiscernible to most email users. As a Free Software user, with your own MTA on your own machine, your software only functions if everyone else respects your right to run that software yourself. Furthermore, if the people you want to email are fully removed from their hosting service, they won't realize or understand that their hosting site might block your emails. These companies have their customers fully manipulated to oppose your software freedom. In other words, you can't appeal to those customers (the people you want to email), because you're likely the only person to ever raise this issue with them (i.e., unless they know you very well, they'll assume you're crazy). You're left begging the provider, with whom you have no business relationship, to convince them that their customers want to hear from you. Your voice rings out indecipherable from the spammers who want the same permission to attack their customers.

    The upshot for Conservancy? For days, Microsoft told all its customers that Conservancy is a spammer; Microsoft did it so subtly that the customers wouldn't even believe it if we told them. Specifically, every time I or one of my Conservancy colleagues emailed organizations using Microsoft's “Exchange Online”, “Office 365” or similar products to host email for their domain4, we got the following response:

                    Sep  2 23:26:26 pine postfix/smtp[31888]: 27CD6E12B: to=,[]:25, delay=5.6, delays=0.43/0/0.16/5, dsn=5.7.1, status=bounced (host[] said: 550 5.7.1 Service unavailable; Client host [] blocked using FBLW15; To request removal from this list please forward this message to (in reply to RCPT TO command))

    Oh, you ask, did you forward your message to the specified address? Of course I did; right away! I got back an email that said:

    Hello ,

    Thank you for your delisting request SRXNUMBERSID. Your ticket was received on (Sep 01 2015 06:13 PM UTC) and will be responded to within 24 hours.

    Once we passed the 24-hour mark with no response, I started looking around for more information. I also saw a suggestion online that calling is the only way to escalate one of those tickets, so I phoned 800-865-9408, gave V-2JECOD my ticket number, and she told me that I could only raise these issues with the “Mail Flow Team”. She put me on hold for them, and told me that I was number 2 in the queue, so it should be a few minutes. I waited on hold for just under six hours. I finally reached a helpful representative, who said the ticket was at the lowest level of escalation available (he hinted that it would take weeks to resolve at that level, which is consistent with other comments about this problem I've seen online). The fellow on the phone agreed to escalate it to the highest priority available, and said that within four hours, Conservancy should be delisted. Thus, ultimately, I did resolve these issues after about 72 hours. But I'd spent about 15 hours all told researching various blacklists, email hosting companies, and their procedures3, and that was after I'd already carefully configured our MTA and DNS to be very RFC-compliant (which is complicated and confusing, but absolutely essential to stay off these blacklists once you're off).

    Admittedly, this sounds like a standard Kafkaesque experience with a large company that almost everyone in post-modern society has experienced. However, it's different in one key way: I had to convince Microsoft to allow me to communicate with their customers who are paying Microsoft for proprietary and/or trade-secret software and services, ostensibly to improve efficiency of their communications. Plus, since Microsoft, by the nature of their so-called spam blocking, doesn't inform their customers whom they've blocked, I and my colleagues would have just sounded crazy if we'd asked our contacts to call their provider instead. (I actually considered this, and realized that we might negatively impact relationships with professional contacts.)

    These problems reduce email software freedom through network effects. Most people rely on third-party proprietary email software from Google, Microsoft, Barracuda, or others. Therefore, most people don't exercise any software freedom regarding email services. Since exercising software freedom for email slowly becomes rarer and rarer (rather than the norm it once was), society slowly but surely pegs those who do exercise software freedom as “random crazy people”.

    There are a few companies who are seeking to do email hosting in a way that respects your software freedom. The real test of such companies is if someone technically minded can get the same software configured on their own systems, and have it work the same way. Yet, in most cases, you go to one of these companies' GitHub pages and find a bunch of stuff pushed public, but limited information on how to configure it so that it functions the same way the hosted service does. RMS wrote years ago that Free Software cannot properly succeed without Free Documentation, and in many of these hosting cases the hosting company is using fully upstreamed Free Software, but has configured the software in a way that is difficult to stumble upon by oneself. (For that reason, I'm committing to writing up tutorials on how Conservancy configured our mail server, so at least I'll be part of the solution instead of part of the problem.)

    BTW, as I dealt with all this, I couldn't help but think of John Gilmore's activism efforts regarding open mail relays. While I don't agree with all of John's positions on this, his fundamental position is right: we must oppose companies who think they know better how we should configure our email servers (or on which IP numbers we should run those servers). I'd add a corollary that there's a serious threat to software freedom, at least with regard to email software, if we continue to allow such top-down control of the once beautifully decentralized email system.

    The future of software freedom depends on issues like this. Imagine someone who has just learned that they can run their own email server, or bought some Free Software-based plug computing system that purports to be a “home cloud” service with email. There's virtually no chance that such users would bother to figure all this out. They'd see their email blocked, declare the “home cloud” solution useless, and would just get an email account from some large third-party provider instead. Thus, I predict that the software freedom that we once had, for our MTAs and MUAs, will eventually evaporate for everyone except those tiny few who invest the time to understand these complexities and fight the for-profit corporate power that curtails software freedom. Furthermore, that struggle becomes Sisyphean as our numbers dwindle.

    Email is the oldest software-centric communication system on the planet. The global email system serves as a canary in the coalmine regarding software freedom and network service freedom issues. Frighteningly, software now controls most of the global communications systems. How long will it be before mobile network providers refuse to terminate PSTN calls or SMSes sent from devices running modified Android firmwares like Replicant? Perhaps those providers, like large email providers, will argue that preventing robocalls (the telephone equivalent of spam) necessitates such blocking. Such network effects place so many dystopias on software freedom's horizon.

    I don't deny that every day, there is more Free Software existing in the world than has ever existed before — the P.T. Barnums of Open Source have that part right. The part they leave out is that, each day, their corporate backers make it a little more difficult to complete mundane tasks using only Free Software. Open Source wins the battle while software freedom loses the war.

    0Yes, I'm intimately aware that Elm's license was non-free, and that the software freedom of PINE's license was in question. That's slightly relevant here but mostly orthogonal to this point, because Free Software MUAs were still very common then, and there were (ultimately successful) projects to actively rewrite the ones whose software freedom was in question.

    1For the last five years, one of Conservancy's Directors Emeriti, Loïc Dachary, has donated an extensive amount of personal time and in-kind donations by providing cloud servers for Conservancy to host its three key servers, including the email server. The burden of maintaining this for us became too time-consuming (very reasonably), and Loïc asked us to find another provider. I want, BTW, to thank Loïc for his years of volunteer work maintaining infrastructure for us; he provided this service for much longer than we could have hoped! Loïc also gave us plenty of warning that we'd need to move. None of these problems are his fault in the least!

    2The obvious supposition is that, because IPv4 numbers are so scarce, this particular IP number was likely used previously by a spammer who was shut down.

    3I of course didn't count the time on phone hold, as I was able to do other work while waiting, but less efficiently, because the hold music was very distracting.

    4If you want to see if someone's domain is a Microsoft customer, see if the MX record for their domain points to

    Posted on Tuesday 15 September 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2015-07-15: Thoughts on Canonical, Ltd.'s Updated Ubuntu IP Policy

    Most of you by now have probably seen Conservancy's and FSF's statements regarding today's update to Canonical, Ltd.'s Ubuntu IP Policy. I have a few personal comments, speaking only for myself, that I want to add that don't appear in the FSF's or Conservancy's analysis. (I wrote nearly all of Conservancy's analysis and did some editing on FSF's analysis, but the statements I add here are my personal opinions and don't necessarily reflect the views of the FSF or Conservancy, notwithstanding that I have affiliations with both orgs.)

    First of all, I think it's important to note the timeline: it took two years of work by two charities to get this change done. The scary thing is that compared to their peers who have also violated the GPL, Canonical, Ltd. acted rather quickly. As Conservancy pointed out regarding the VMware lawsuit, it's not uncommon for these negotiations to take even four years before we all give up and have to file a lawsuit. So, Canonical, Ltd. resolved the matter at least twice as fast as VMware, and they deserve some credit for that — even if other GPL violators have set the bar quite low.

    Second, I have to express my sympathy for the positions on this matter taken by Matthew Garrett and Jonathan Riddell. Their positions show clearly that, while the GPL violation is now fully resolved, the community is very concerned about what happens regarding non-copylefted software in Ubuntu, and thus Ubuntu as a whole.

    Realize, though, that these trump clauses (terms in an overarching license that let a FLOSS license's requirements prevail over that license's other restrictions) are widely used throughout the software industry. For example, electronics manufacturers who ship an Android/Linux system with a standard, disgustingly worded, forbid-everything EULA usually include a trump clause not unlike Ubuntu's. In such systems, usually, the only copylefted program is the kernel named Linux. The rest of the distribution includes tons of (now proprietarized) non-copylefted code from Android (as well as a bunch of born-proprietary applications too). The trump clause assures the software freedom rights for the one copylefted work present, but all the non-copylefted ones are subject to the strict EULA (which often includes “no reverse engineering” clauses, etc.). That means if the electronics company did change the Android Java code in some way, you can't even legally reverse engineer it — even though it was Apache-licensed by upstream.

    Trump clauses are thus less than ideal because they achieve compliance only by allowing a copyleft to prevail when the overarching license contradicts specific requirements, permissions, or rights under copyleft. That's acceptable because copyleft licenses have many important clauses that assure and uphold software freedom. By contrast, most non-copyleft licenses have very few requirements, and thus they lack adequate terms to triumph over any anti-software-freedom terms of the overarching license. For example, if I take a 100% ISC-licensed program and build a binary from it, nothing in the ISC license prohibits me from imposing this license on you: “you may not redistribute this binary commercially”. Thus, even if I also say to you: “but also, if the ISC license grants rights, my aforementioned license does not modify or reduce those rights”, nothing has changed for you. You still have a binary that you can't distribute commercially, and there was no text in the ISC license to force the trump clause to save you.

    Therefore, this whole situation is a simple and clear argument for why copyleft matters. Copyleft can and does (when someone like me actually enforces it) prevent such situations. But copyleft is not infinitely expansive. Nearly every full operating system distribution available includes an aggregated mix of copylefted, non-copyleft, and often fully-proprietary userspace applications. Nearly every company that distributes them wraps the whole thing with some agreement that restricts some rights that copyleft defends, and then adds a trump clause that gives an exception just for FLOSS license compliance. Sadly, I have yet to see a company trailblaze adoption of a “software freedom preservation” clause that guarantees copyleft-like compliance for non-copylefted programs and packages. Thus, the problem with Ubuntu is just a particularly bad example of what has become a standard industry practice by nearly every “open source” company.

    How badly these practices impact software freedom depends on the strictness and detailed terms of the overarching license (and not the contents of the trump clause itself; they are generally isomorphic0). The task of analyzing and rating “relative badness” of each overarching licensing document is monumental; there are probably thousands of different ones in use today. Matthew Garrett points out why Canonical, Ltd.'s is particularly bad, but that doesn't mean there aren't worse (and better) situations of a similar ilk. Perhaps our next best move is to use copyleft licenses more often, so that the trump clauses actually do more.

    In other words, as long as there is non-copylefted software aggregated in a given distribution of an otherwise Free Software system, companies will seek to put non-Free terms on top of the non-copylefted parts. To my knowledge, every distribution-shipping company (except for extremely rare, Free-Software-focused companies like ThinkPenguin) places some kind of restrictions in its business terms for its enterprise distribution products. Everyone seems to be asking me today to build the “worst to almost-benign” ranking of these terms, but I've resisted the urge to try. I think the safe bet is to assume that if you're looking at one of these trump clauses, there is some sort of software-freedom-unfriendly restriction floating around in the broader agreement, and you should thus just avoid that product entirely. Or, if you really want to use it, fork it from source and relicense the non-copylefted stuff under copyleft licenses (which is permitted by nearly all non-copyleft licenses), to prevent future downstream actors from adding more restrictive terms. I'd even suggest this as a potential solution to the current Ubuntu problem (or, better yet, just go back upstream to Debian and do the same :).

    Finally, IMO the biggest problem with these “overarching licenses with a trump clause” is their use by companies who herald “open source” friendliness. I suspect the community ire comes from a sense of betrayal. Yet, I feel only my usual anger at proprietary software here; I don't feel betrayed. Rather, this is just another situation that proves that saying you are an “open source company” isn't enough; only the company's actions and “fine print” terms matter. Now that open source has really succeeded at coopting software freedom, enormous effort is required to ascertain if any company respects your software freedom. We must ignore the ballyhoo of “community managers” and look closely at the real story.

    0Despite Canonical, Ltd.'s use of a trump clause, I don't think these various trump clauses are canonically isomorphic. There is no natural mapping between these various trump clauses, but they all do have the same effect: they assure that when the overarching terms conflict with a FLOSS license, the FLOSS license triumphs over the overarching terms, no matter what they are. However, the potential relevance of the phrase “canonical isomorphism” here is yet another example of why it's confusing and insidious that Canonical, Ltd. insisted so strongly on using canonical in a non-canonical way.

    Posted on Wednesday 15 July 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2015-07-04: Did You Actually Read the Lower Court's Decision?

    I'm seeing plenty of people, including some non-profit organizations along with the usual punditocracy, opining on the USA Supreme Court's denial of a writ of certiorari in the Oracle v. Google copyright infringement case. And, it's not that I expect everyone in the world to read my blog, but I'm amazed that people who should know better haven't bothered to even read the lower Court's decision, which is de facto upheld now that the Supreme Court has declined to hear the appeal.

    I wrote at great length about why the decision isn't actually a decision about whether APIs are copyrightable, and that the decision actually gives us some good clarity with regard to the issue of combined work distribution (i.e., when you distribute your own works with the copyrighted material of others combined into a single program). The basic summary of the blog post I linked to above is simply: The lower Court seemed genuinely confused about whether Google copy-and-pasted code, as the original trial seems to have inappropriately conflated API reimplementation with code cut-and-paste.

    No one else has addressed this nuance of the lower Court's decision in the year since the decision came down, and I suspect that's because in our TL;DR 24-hour-news cycle, it's much easier for the pundits and organizations tangentially involved with this issue to get a bunch of press by giving out confusing information.

    So, I'm mainly making this blog post to encourage people to go back and read the decision and my blog post about it. I'd be delighted to debate people if they think I misread the decision, but I won't debate you unless you assure me you read the lower Court's decision in its entirety. I think that leaves virtually no one who will. :-/

    Posted on Saturday 04 July 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2015-06-26: John Oliver Falls For Software Patent Trade Association Messaging

    I've been otherwise impressed with John Oliver and his ability on Last Week Tonight to find key issues that don't have enough attention and give reasonably good information about them in an entertaining way — I even lauded Oliver's discussion of non-profit organizational corruption last year. I suppose that's why I'm particularly sad (as I caught up last weekend on an old episode) to find that John Oliver basically fell for the large patent holders' pro-software-patent rhetoric on so-called “software patents”.

    In short, Oliver mimics the trade association and for-profit software industry rhetoric of software patent reform rather than abolition — because trolls are the only problem. I hope the world's largest software patent holders send Oliver's writing staff a nice gift basket, as such might be the only thing that would signal to them that they fell into this PR trap. Although, it's admittedly slightly unfair to blame Oliver and his writers; the situation is subtle.

    Indeed, someone not particularly versed in the situation can easily fall for this manipulation. It's just so easy to criticize non-practicing entities. Plus, the idea that the sole inventor might get funded on Shark Tank has a certain appeal, and fits a USAmerican sensibility of personal capitalistic success. Thus, the first-order conclusion is often, as Oliver's piece concludes, maybe if we got rid of trolls, things wouldn't be so bad.

    And then there's also the focus on the patent quality issue; it's easy to convince the public that higher quality patents will make it ok to restrict software sharing and improvement with patents. It's great rhetoric for pro-patent entities to generate outrage among the technology-using public by pointing to, say, an example of a patent that reads on every Android application and telling a few jokes about patent quality. In fact, at nearly every FLOSS conference I've gone to in the last year, OIN has sponsored a speaker to talk about that very issue. The jokes at such talks aren't as good as John Oliver's, but they still get laughs and technologists upset about patent quality and trolls — but through careful cultural engineering, not about software patents themselves.

    In fact, I don't think I've seen a for-profit industry and its trade associations do so well at public outrage distraction since the “tort reform” battles of the 1980s and 1990s, which were produced in part by George H. W. Bush's beloved M.C. Rove himself. I really encourage those who want to understand how the anti-troll messaging manipulation works to study how and why the tort reform issue played out the way it did. (As I mentioned on the Free as in Freedom audcast, Episode 0x13, the documentary film Hot Coffee is a good resource for that.)

    I've literally been laughed at publicly by OIN representatives when I point out that IBM, Microsoft, and other practicing entities do software patent shake-downs, too — just like the trolls. They're part of a well-trained and well-funded (by trade associations and companies) PR machine out there in our community to convince us that trolls and so-called “poor patent quality” are the only problems. Yet, nary a year has gone by in my adult life where I don't see some incident where a so-called legitimate, non-obvious software patent causes serious trouble for a Free Software project. From RSA, to the codec patents, to Microsoft FAT patent shakedowns, to IBM's shakedown of the Hercules open source project, to exfat — and that's just a few choice examples from the public tip of the practicing entity shakedown iceberg. IMO, the practicing entities are just trolls with more expensive suits and proprietary software licenses for sale. We should politically oppose the companies and trade associations that bolster them — and call for an end to software patents.

    Posted on Friday 26 June 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2015-06-15: Why Greet Apple's Swift 2.0 With Open Arms?

    Apple announced last week that its Swift programming language — a currently fully proprietary software successor to Objective C — will probably be partially released under an OSI-approved license eventually. Apple explicitly stated though that such released software will not be copylefted. (Apple's pathological hatred of copyleft is reasonably well documented.) Apple's announcement remained completely silent on patents, and we should expect the chosen non-copyleft license will not contain a patent grant. (I've explained at great length in the past why software patents are a particularly dangerous threat to programming language infrastructure.)

    Apple's dogged pursuit of non-copyleft replacements for copylefted software is far from new. For example, Apple has worked to create replacements for Samba so they need not ship Samba in OSX. But, their anti-copyleft witch hunt goes back much further. It began when Richard Stallman himself famously led the world's first GPL enforcement effort against NeXT, and Objective-C was liberated. For a time, NeXT and Apple worked upstream with GCC to make Objective-C better for the community. But, that whole time, Apple was carefully plotting its escape from the copyleft world. Fortuitously, Apple eventually discovered a technically brilliant (but sadly non-copylefted) research programming language and compiler system called LLVM. Since then, Apple has sunk millions of dollars into making LLVM better. On the surface, that seems like a win for software freedom, until you look at the bigger picture: their goal is to end copyleft compilers. Their goal is to pick and choose when and how programming language software is liberated. Swift is not a shining example of Apple joining us in software freedom; rather, it's a recent example of Apple's long-term strategy to manipulate open source — giving our community occasional software freedom on Apple's own terms. Apple gives us no bread but says let them eat cake instead.

    Apple's got PR talent. They understand that merely announcing the possibility of liberating proprietary software gets press. They know that few people will follow through and determine how it went. Meanwhile, the standing story becomes: Wait, didn't Apple open source Swift anyway?. Already, that false soundbite's grip strengthens, even though the answer remains a resounding No!. However, I suspect that Apple will probably meet most of their public pledges. We'll likely see pieces of Swift 2.0 thrown over the wall. But the best stuff will be kept proprietary. That's already happening with LLVM, anyway; Apple already ships a no-source-available fork of LLVM.

    Thus, Apple's announcement didn't happen in a void. Apple didn't just discover open source after years of neutrality on the topic. Apple's move is calculated, which led various industry pundits like O'Grady and Weinberg to ask hard questions (some of which are similar to mine). Yet, Apple's hype is so good, that it did convince one trade association leader.

    To me, Apple's not-yet-executed move to liberate some of the Swift 2.0 code seems a tactical stunt to win over developers who currently prefer the relatively more open nature of the Android/Linux platform. While nearly all the Android userspace applications are proprietary, and GPL violations on Android devices abound, at least the copyleft license of Linux itself provides the opportunity to keep the core operating system of Android liberated. No matter how much Swift code is released, such will never be true with Apple.

    I'm often pointing out in my recent talks how complex and treacherous the Open Source and Free Software political climate became in the last decade. Here's a great example: Apple is a wily opponent, utilizing Open Source (the cooption of Free Software) to manipulate the press and hoodwink the would-be spokespeople for Linux to support them. Many of us software freedom advocates have predicted for years that Free Software unfriendly companies like Apple would liberate more and more code under non-copyleft licenses in an effort to create walled gardens of seeming software freedom. I don't revel in my past accuracy of such predictions; rather, I feel simply the hefty weight of Cassandra's curse.

    Posted on Monday 15 June 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2015-06-03: The Satirized Is the Satirist, or Who Bought the “Journalists”?

    I watched the most recent Silicon Valley episode last night. I laughed at some parts (not as much as a usual episode) and then there was a completely unbelievable tech-related plot twist — quite out of character for that show. I was surprised.

    When the credits played, my jaw dropped when I saw the episode's author was Dan Lyons. Lyons (whose work has been promoted by the Linux Foundation) once compared me to a communist and a member of organized crime (in Forbes, a prominent publication for the wealthy) because of my work enforcing the GPL.

    In the years since Lyons' first anti-software freedom article (yes, there were more), I've watched many who once helped me enforce the GPL change positions and oppose GPL enforcement (including allies who once received criticism alongside me). Many such allies went even further — publicly denouncing my work and regularly undermining GPL enforcement politically.

    Attacks by people like Dan Lyons — journalists well connected with industry trade associations and companies — are one reason so many people are too afraid to enforce the GPL. I've wondered for years why the technology press has such a pro-corporate agenda, but it eventually became obvious to me in early 2005 when listening to yet another David Pogue Apple product review: nearly the entire tech press is bought and paid for by the very companies on which they report! The cartoonish level of Orwellian fear across our industry of GPL enforcement is but one example of many for-profit corporate agendas that people like Lyons have helped promulgate through their pro-company reporting.

    Meanwhile, I had taken Silicon Valley (until this week) as pretty good satire on the pathetic state of the technology industry today. Perhaps Alec Berg and Mike Judge just liked Lyons' script — not even knowing that he is a small part of the problem they seek to criticize. Regardless of why his script was produced, the line between satirist and the satirized is clearly thinner than I imagined; it seems just as thin as the line between technology journalist and corporate PR employee.

    I still hope that Berg and Judge seek, just as Judge did in Office Space, to pierce the veil of for-profit corporate manipulation of employees and users alike. However, for me, the luster of their achievement fades when I realize at least some of their creative collaborators are central to the problem they criticize.

    Shall we start a letter writing campaign to convince them to donate some of Silicon Valley's proceeds to Free Software charities? Or, at the very least, to convince Berg to write one of his usually excellent episodes about how the technology press is completely corrupted by the companies on which they report?

    Posted on Wednesday 03 June 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2015-02-26: Vote Karen Sandler for Red Hat's Women In Open Source Award

    I know this decision is tough, as all the candidates in the list deserve an award. However, I hope that you'll choose to vote for my friend and colleague, Karen Sandler, for the 2015 Red Hat Women in Open Source Community Award. Admittedly, most of Karen's work has been for software freedom, not Open Source (i.e., her work has been community and charity-oriented, not for-profit oriented). However, giving her an “Open Source” award is a great way to spread the message of software freedom to the for-profit corporate Open Source world.

    I realize that there are some amazingly good candidates, and I admit I'd be posting a blog post to endorse someone else (No, I won't say who :) if Karen wasn't on the ballot for the Community Award. So, I wouldn't say you backed the wrong candidate if you vote for someone else. And, I'm eminently biased since Karen and I have worked together on Conservancy since its inception. But, if you can see your way through to it, I hope you'll give Karen your vote.

    (BTW, I'm not endorsing a candidate in the Academic Award race. I am just not familiar enough with the work of the candidates involved to make an endorsement. I even abstained from voting in that race myself because I didn't want to make an uninformed vote.)

    Posted on Thursday 26 February 2015 by Bradley M. Kuhn.

    Submit comments on this post to <>.

  • 2015-02-10: Trade Associations Are Never Neutral

    It's amazing what we let for-profit companies and their trade associations get away with. Today, Joyent announced the Node.js Foundation, in conjunction with various for-profit corporate partners and Linux Foundation (which is a 501(c)(6) trade association under the full control of for-profit companies).

    Joyent and their corporate partners claim that the Node.js Foundation will be neutral and provide open governance. Yet, they don't even say what corporate form the new organization will take, nor present its by-laws. There's no way that anyone can know if the organization will be neutral and provide open governance without at least that information.

    Meanwhile, I've spent years pointing out that what corporate form you choose matters. In the USA, if you pick a 501(c)(6) trade association (like Linux Foundation), the result is not a neutral non-profit home. Rather, a trade association simply promotes the interest of the for-profit businesses that control it. Such organizations don't have the community's interests at heart, but rather the interests of the for-profit corporate masters who control the Board of Directors. Sadly, most people tend to think that if you put the word “Foundation” in the name0, you magically get a neutral home and open governance.

    Fortunately for these trade associations, they hide behind the far-too-general term non-profit, and act as if all non-profits are equal. Why do trade association representatives and companies ignore the differences between charities and trade associations? Because they don't want you to know the real story.

    Ultimately, charities serve the public good. They can do nothing else, lest they run afoul of IRS rules. Trade associations serve the business interests of the companies that join them. They can do nothing else, lest they run afoul of IRS rules. I would certainly argue the Linux Foundation has done an excellent job serving the interests of the businesses that control it. They can be commended for meeting their mission, but that mission is not one to serve the individual users and developers of Linux and other Free Software. What will the mission of the Node.js Foundation be? We really don't know, but given who's starting it, I'm sure it will be to promote the businesses around Node.js, not its users and developers.

    0Richard Fontana recently pointed out to me that it is extremely rare for trade associations to call themselves foundations outside of the Open Source and Free Software community. He found very few examples of it in the wider world. He speculated that this may be an attempt to capitalize on the credibility of the Free Software Foundation, which is older than all other non-profits in this community by at least two decades. Of course, FSF is a 501(c)(3) charity, and since there is no IRS rule about calling a 501(c)(6) trade association by the name “Foundation”, this is a further opportunity to spread confusion about who these organizations serve: business interests or the general public.

    Posted on Tuesday 10 February 2015 by Bradley M. Kuhn.

    Submit comments on this post to <>.


  • 2015-01-02: Weirdness with hplip package in Debian wheezy

    I suspect this information is of limited use because it's far too vague. I didn't even file it as a Debian bug because I don't think I have enough information here to report a bug. It's not dissimilar from the issues reported in Debian bug 663868, but the system in question doesn't have foo2zjs installed. So, I filed Debian Bug 774460.

    However, in searching around the Internet for the syslog messages below, I found very few results. So, in the interest of increasing the indexing on these error messages, I include the below:

                    Jan  2 18:29:04 puggington kernel: [ 2822.256130] usb 2-1: new high-speed USB device number 16 using ehci_hcd
                    Jan  2 18:29:04 puggington kernel: [ 2822.388961] usb 2-1: New USB device found, idVendor=03f0, idProduct=5417
                    Jan  2 18:29:04 puggington kernel: [ 2822.388970] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
                    Jan  2 18:29:04 puggington kernel: [ 2822.388977] usb 2-1: Product: HP Color LaserJet CP2025dn
                    Jan  2 18:29:04 puggington kernel: [ 2822.388983] usb 2-1: Manufacturer: Hewlett-Packard
                    Jan  2 18:29:04 puggington kernel: [ 2822.388988] usb 2-1: SerialNumber: 00CNGS705379
                    Jan  2 18:29:04 puggington kernel: [ 2822.390346] usblp0: USB Bidirectional printer dev 16 if 0 alt 0 proto 2 vid 0x03F0 pid 0x5417
                    Jan  2 18:29:04 puggington udevd[25370]: missing file parameter for attr
                    Jan  2 18:29:04 puggington mtp-probe: checking bus 2, device 16: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1"
                    Jan  2 18:29:04 puggington mtp-probe: bus: 2, device: 16 was not an MTP device
                    Jan  2 18:29:04 puggington hp-mkuri: io/hpmud/model.c 625: unable to find [s{product}] support-type in /usr/share/hplip/data/models/models.dat
                    Jan  2 18:25:19 puggington kernel: [ 2596.528574] usblp0: removed
                    Jan  2 18:25:19 puggington kernel: [ 2596.535273] usblp0: USB Bidirectional printer dev 12 if 0 alt 0 proto 2 vid 0x03F0 pid 0x5417
                    Jan  2 18:25:24 puggington kernel: [ 2601.727506] usblp0: removed
                    Jan  2 18:25:24 puggington kernel: [ 2601.733244] usblp0: USB Bidirectional printer dev 12 if 0 alt 0 proto 2 vid 0x03F0 pid 0x5417
                    [last two repeat until unplugged]

    I really think the problem relates specifically to hplip 3.12.6-3.1+deb7u1. As I said in the bug report, the following commands resolved the problem for me:

                    # dpkg --purge hplip
                    # dpkg --purge system-config-printer-udev
                    # aptitude install system-config-printer-udev
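
    If you want to keep that fix in place, one further step — which I offer only as a sketch, since I haven't verified it on this exact system — is to put hplip on hold in dpkg's selection database, so a routine upgrade doesn't pull the problematic 3.12.6-3.1+deb7u1 package back in:

                    # Mark hplip as held so upgrades won't reinstall it (run as root):
                    echo "hplip hold" | dpkg --set-selections
                    # Confirm the hold took effect:
                    dpkg --get-selections hplip

    Once a fixed hplip package ships, the hold can be removed with echo "hplip install" | dpkg --set-selections.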

    Posted on Friday 02 January 2015 by Bradley M. Kuhn.

    Comment on this post in this conversation.



  • 2014-12-23: Toward Civil Behavior

    I thought recently of a quote from a Sopranos Season 1 episode, A Hit is a Hit, wherein Tony Soprano's neighbor proclaims for laughs at a party, Sometimes I think the only thing separating American business from the Mob is [EXPLETIVE] whacking somebody.

    The line stuck with me in the decade and a half since I heard it. When I saw the episode in 1999, my career was basically just beginning, as I was just finishing graduate school and had just begun working for the FSF. I've often wondered over these years how close that quote — offered glibly to explore a complex literary theme — matches reality.

    Organized crime drama connects with audiences because such drama explores a primal human theme: given the human capacity for physical violence and notwithstanding the Enlightenment, how and why does physical violence find its way into otherwise civilized social systems? A year before my own birth, The Godfather explored the same theme famously with the line, It's not personal, Sonny. It's strictly business. I've actually heard a would-be community leader quote that line as a warped justification for his verbally abusive behavior.

    Before I explain further, I should state my belief that physical violence always crosses a line that's as wide as the Grand Canyon. Film depictions consider the question of whether the line is blurry, but it's certainly not. However, what intrigues me is how often “businesspeople” and celebrities will literally walk right up to the edge of that Grand Canyon, and pace back and forth there for days — and even years.

    In the politics of Free, Libre and Open Source Software (FLOSS), some people regularly engage in behavior right on that line: berating, verbal abuse, and intimidation. These behaviors are consistently tolerated, accepted, and sometimes lauded in FLOSS projects and organizations. I can report from direct experience: if you think what happens on public mailing lists is bad, what happens on the private phone calls and in-person meetings is even worse. The types of behavior that would-be leaders employ would surely shock you.

    I regularly ponder whether I have a duty to disclose how much worse the back-room behavior is compared to the already abysmal public actions. The main reason I don't (until a few decades from now in my memoirs — drafting is already underway ;) is that I suspect people won't believe me. The smart abusive people know how to avoid leaving a record of their most abusive behavior perpetrated against their colleagues. I know of at least one person who will refuse to have a discussion via email or IRC and insist on in-person or telephone meetings specifically because the person outright plans to act abusively and doesn't want a record.

    While it's certainly a relief that I cannot report a single incident of actual assault in the FLOSS community, I have seen behavior escalate from ill-advised and mean political strategies to downright menacing. For example, I often receive threats of public character assassination, and character assassination in the backchannel rumor mill remains ongoing. At a USENIX conference in the late 1990s, I saw Hans Reiser screaming and wagging his finger menacingly in the face of another Linux developer. During many FLOSS community scandals, women have received threats of physical violence. Nevertheless, many FLOSS “leaders” still consider psychological intimidation a completely reasonable course of action and employ it regularly.

    How long are we going to tolerate this, and should we simply tolerate it, merely because it doesn't cross that huge chasm (on the other side of which lies physical violence)? How close are we willing to get? Is it really true that any words are fair game, and nothing you can say is off-limits? (In my experience, verbally abusive people often use that claim as an iron-clad excuse.) But, if we don't start asking these questions regularly, our community culture will continue to deteriorate.

    I realize I'm just making a statement, and not proposing real action, which (I admit) is only marginally helpful. As Tor recently showed, though, making a statement is the first step. In other words, saying “No, this behavior is not acceptable” is undoubtedly the only way to begin. Our community has been way too slow in taking that one step, so we've now got a lot of catching up to do to get to the right place in a reasonable timeframe.

    Posted on Tuesday 23 December 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-12-03: Help Fund Open-Wash-Free Zones

    Recently, I was forwarded an email from an executive at a 501(c)(6) trade association. In answering a question about accepting small donations for an “Open Source” project through their organization, the Trade Association Executive responded Accepting [small] donations [from individuals] is possible, but [is] generally not a sustainable way to raise funds for a project based on our experience. It's extremely difficult … to raise any meaningful or reliable amounts.

    I was aghast, but not surprised. The current Zeitgeist of the broader Open Source and Free Software community incubated his disturbing mindset. Our community suffers now from regular and active cooption by for-profit interests. The Trade Association Executive's fundraising claim — which probably even holds true in their subset of the community — shows the primary mechanism of cooption: encourage funding only from a few, big sources so they can slowly but surely dictate project policy.

    Today, more revenue than ever goes to the development of code released under licenses that respect software freedom. That belabored sentence contains the key subtlety: most Free Software communities are not receiving more funding than before; in fact, they're probably receiving less. Instead, Open Source became a fad, and now it's “cool” for for-profit companies to release code, or channel funds through some trade associations to get the code they want written and released. This problem is actually much worse than traditional open-washing. I'd call this for-profit cooption its own subtle open-washing: picking a seemingly acceptable license for the software, but “engineering” the “community” as a proxy group controlled by for-profit interests.

    This cooption phenomenon leaves the community-oriented efforts of Free Software charities underfunded and (quite often) under attack. These same companies that fund plenty of Open Source development also often oppose copyleft. Meanwhile, the majority of Free Software projects that predate the “Open Source Boom” didn't rise to worldwide fame and discover a funding bonanza. Such less famous projects still struggle financially for the very basics. For example, I participate in email threads nearly every day with Conservancy member projects who are just trying to figure out how to fund developers' travel to a conference to give a talk about their project.

    Thus, a sad kernel of truth hides in the Trade Association Executive's otherwise inaccurate statement: big corporate donations buy influence, and a few of our traditionally community-oriented Free Software projects have been “bought” in various ways with this influx of cash. The trade associations seek to facilitate more of this. Unless we change our behavior, the larger Open Source and Free Software community may soon look much like the political system in the USA: where a few lobbyist-like organizations control the key decision-making through funding. In such a structure, who will stand up for those developers who prefer copyleft? Who will make sure individual developers receive the organizational infrastructure they need? In short, who will put the needs of individual developers and users ahead of for-profit companies?

    Become a Conservancy Supporter!

    The answer is simple: non-profit 501(c)(3) charities in our community. These organizations are required by IRS regulation to pass a public support test: they must seek large portions of their revenue from individuals in the general public and may not receive too much from any small group of sources. Our society charges these organizations with the difficult but attainable tasks of (a) answering to the general public, never to for-profit corporate donors, and (b) funding the organization via mechanisms appropriate to that charge. The best part is that you, the individual, have the strongest say in reaching those goals.

    Those who favor for-profit corporate control of “Open Source” projects will always insist that Free Software initiatives and plans just cannot be funded effectively via small, individual donations. Please, for the sake of software freedom, help us prove them wrong. There's even an easy way that you can do that. For just $10 a month, you can join the Conservancy Supporter program. You can help Conservancy stand up for Free Software projects who seek to keep project control in the hands of developers and users.

    Of course, I realize you might not like my work at Conservancy. If you don't, then give to the FSF instead. If you like neither Conservancy nor the FSF, then give to the GNOME Foundation. Just pick the 501(c)(3) non-profit charity in the Free Software community that you like best and donate. The future of software freedom depends on it.

    Posted on Wednesday 03 December 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2014-11-11: Groupon Tried To Take GNOME's Name & Failed

    [ I'm writing this last update to this post, which I posted at 15:55 US/Eastern on 2014-11-11, above the original post (and its other update), since the first text below is the most important message about this situation. (Please note that I am merely a mundane GF member, and I don't speak for GF in any way.) ]

    There is a lesson learned here, now that Groupon has (only after public admonishing from GNOME Foundation) decided to do what GNOME Foundation asked them for from the start. Specifically, I'd like to point out how it's all too common for for-profit companies to treat non-profit charities quite badly, even when the non-profit charity is involved in an endeavor that the for-profit company nominally “supports”.

    The GNOME Foundation (GF) Board minutes are public; you can go and read them. If you do, you'll find that for many months, GF has been spending substantial time and resources to deal with this issue. They've begged Groupon to be reasonable, and Groupon refused. Then, GF (having at least a few politically savvy folks on their Board of Directors) decided they had to make the (correct) political next move and go public.

    As a professional “Free Software politician”, I can tell you from personal experience that going public with a private dispute is always a gamble. It can backfire, and thus is almost always a “last hope” before the only other option: litigation. But, Groupon's aggressive stance and deceitful behavior seems to have left GF with little choice; I'd have done the same in GF's situation. Fortunately, the gamble paid off, and Groupon caved when they realized that GF would win — both in the court of public opinion and in a real court later.

    However, this tells us something about the ethos of Groupon as a company: they are willing to waste the resources of a tiny non-profit charity (which is currently run exclusively by volunteers) simply because Groupon thought they could beat that charity down by outspending them. And, it's not as if it's a charity with a mission Groupon opposes — it's a charity operating in a space which Groupon claims to love.

    I suppose I'm reacting so strongly to this because this is exactly the kind of manipulative behavior I see every day from GPL violators. The situations are quite analogous: a non-profit charity, standing up for the legal rights of a group of volunteer Free Software developers, is viewed by the company as a bug it can squash with its shoe. The company relents only when it realizes the bug won't die, and that it will just have to let the bug live this time.

    GF frankly and fortunately got off a little light. For my part, the companies (and their cronies) that oppose copyleft have called me a “copyright troll”, “guilty of criminal copyright abuse”, and also accused me of enforcing the GPL merely to “get rich” (even though my salary has been public since 1999 and is less than all of theirs). Based on my experience with GPL enforcement, I can assure you: Groupon had exactly two ways to go politically: either give up almost immediately once the dispute was public (which they did), or start attacking GF with dirty politics.

    Having personally often faced the aforementioned “next political step” by the for-profit company in similar situations, I'm thankful that GF dodged that, and we now know that Groupon is unlikely to make dirty political attacks against GF as their next move. However, please don't misread this situation: Groupon didn't “do something nice just because GF asked them to”, as the Groupon press people are no doubt at this moment feeding the tech press for tomorrow's news cycle. The real story is: “Groupon stonewalled, wasting limited resources of a small non-profit for months, and gave up only when the non-profit politically outflanked them”.

    My original post and update from earlier in the day on 2014-11-11 follows as they originally appeared:

    It's probably been at least a decade, possibly more, since I saw a proprietary software company attempt to take the name of an existing Free Software project. I'm very glad GNOME Foundation had the forethought to register their trademark, and I'm glad they're defending it.

    It's important to note that names are really different from copyrights. I've been a regular critic of the patent and copyright systems, particularly as applied to software. However, trademark law, though the system has some serious flaws, has at its root a useful principle: people looking for stuff they really want shouldn't be confused by what they find. (I remember as a kid the first time I got a knock-off toy and I was quite frustrated and upset for being duped.) Trademark law is designed primarily to prevent the public from being duped.

    Trademark is also designed to prevent a new actor in the marketplace from gaining advantage using the good name of an existing work. Of course, that's what Groupon is doing here, but Groupon's position seems to have come from the sleaziest of their attorneys, and it's completely disingenuous: “Oh, we never heard of GNOME and we didn't even search the trademark database before filing. Meanwhile, now that you've contacted us, we're going to file a bunch more trademarks with your name in them.” BTW, the odds that they are lying about never searching the USPTO database for GNOME are close to 100%. I have been involved with the registration of many a trademark for a Free Software project: the first thing you do is search the trademark database. The USPTO even provides a public search engine for it!

    Finally, GNOME's legal battle is not merely their own. Proprietary software companies always think they can bully Free Software projects. They figure Free Software just doesn't matter that much and doesn't have the resources to fight. Of course, one major flaw in the trademark system is that it is expensive (because of the substantial time investment needed by trademark experts) to fight an attack like this. Therefore, please donate to the GNOME Foundation to help them in this fight. This is part of a proxy war against all proprietary software companies that think they can walk all over a Free Software project. Thus, this issue relates to many others in our community. We have to show the wealthy companies that Free Software projects with limited resources are not pushovers, but non-profit charities like GNOME Foundation cannot do this without your help.

    Update on 2014-11-11 at 12:23 US/Eastern: Groupon responded to the GNOME Foundation publicly on their “engineering” site. I wrote the following comment on that page and posted it, but of course they refused to allow me to post a comment [0], so I've posted my comment here:

    If you respected software freedom and the GNOME project, then you'd have already stopped trying to use their good name (which was trademarked before your company was even founded) to market proprietary software. You say you'd be glad to look for another name; I suspect that was GNOME Foundation's first request to you, wasn't it? Are you saying the GNOME Foundation has never asked you to change the name of the product you've been calling GNOME?

    Meanwhile, your comments about “open source” are suspect at best. Most technology companies these days have little choice but to interact in some way with open source. I see, of course, that Groupon has released a few tidbits of code, but your website is primarily proprietary software. (I notice, for example, that a visit just to your welcome page at attempts to install a huge amount of proprietary Javascript on my machine — luckily I use NoScript to reject it.) Therefore, your argument that you “love open source” is quite dubious. Someone who loves open source doesn't just liberate a few tidbits of their code; they embrace it fully. To be accurate, you probably should have said: We like open source a little bit.

    Finally, your statement, which is certainly well-drafted Orwellian marketing-speak, doesn't actually answer any of the points the GNOME Foundation raised with you. According to the GNOME Foundation, you were certainly communicating, but in the meantime you were dubiously registering more infringing trademarks with the USPTO. The only reasonable conclusion is that you used the communication to buy time to stab GNOME Foundation in the back further. I do a lot of work defending copyleft communities against companies that try to exploit and mistreat those communities, and yours are the exact types of manipulative tactics I often see in those negotiations.

    [0] While it's of course standard procedure for websites to moderate comments, I find it additionally disingenuous when a website looks like it accepts comments but then refuses some. Obviously, I don't think trolls should be given a free pass to submit comments, but I rather like the solution of simple full disclosure: Groupon should disclose that they are screening some comments. This, BTW, is why I just use a third party application ( for my comments. Anyone can post. :)

    Posted on Tuesday 11 November 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-11-08: Branding GNU Mailman Headers & Footers

    As always, when something takes me a while to figure out, I try to post the generally useful technical information on my blog. For the new site, I've been trying to get all the pages branded properly with the header/footer. This was straightforward for ikiwiki (which hosts the main site), but I spent an hour searching around this morning for how to brand the GNU Mailman instance on

    Ultimately, here's what I had to do to get everything branded, and I'm still not completely sure I found every spot. It seems that if you wanted to make a useful patch to GNU Mailman, you could offer up a change that unifies the HTML templating and branding. In the meantime, at least for GNU Mailman 2.1.15 as found in Debian 7 (wheezy), here's what you have to do:

    First, some of the branding details are handled in the Python code itself, so my first action was:

                        # cd /var/lib/mailman/Mailman
                        # cp -pa /etc/mailman
                        # ln -sf /etc/mailman/
    I did this because is not a file that Debian's Mailman package puts in /etc/mailman, and I wanted to keep track with etckeeper that I was modifying that file.

    The primary modifications that I made to that file were in the MailmanLogo() method, to which I added a custom footer, and in the Document.Format() method, to which I added a custom header (at least when self.suppress_head is not set). The suppress_head thing was a red flag telling me it was likely not enough merely to change these methods to get a custom header and footer on every page. I was right. Ultimately, I also had to change nearly all the HTML files in /etc/mailman/en/, each of which needed different changes based on what file it was, and there was no clear guideline. I suppose I could have added <MM-Mailman-Footer> to every file that had a </BODY> but lacked that tag, to get my footer everywhere, but in the end I custom-hacked the whole thing.
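    Mailman 2.1 spreads its page generation across Python code and static per-language templates, which is why the branding above had to be hacked file by file. A unifying patch would instead funnel every generated page through one wrapper function. The sketch below only illustrates that idea; it is not Mailman's actual API, and brand_page, SITE_HEADER, and SITE_FOOTER are hypothetical names I've made up for the example:

    ```python
    import re

    # Hypothetical site-wide branding fragments (not part of Mailman itself).
    SITE_HEADER = '<div class="site-header">Site navigation goes here</div>'
    SITE_FOOTER = '<div class="site-footer">Site-wide footer goes here</div>'

    def brand_page(html):
        """Inject the site header right after the opening <BODY ...> tag and
        the site footer just before </BODY>, tolerating the uppercase HTML
        tags that Mailman 2.1 emits."""
        html = re.sub(r'(<body[^>]*>)', lambda m: + SITE_HEADER,
                      html, count=1, flags=re.IGNORECASE)
        html = re.sub(r'(</body>)', lambda m: SITE_FOOTER +,
                      html, count=1, flags=re.IGNORECASE)
        return html
    ```

    If every code path that emits a page called one such function, rebranding an installation would be a single edit rather than a hunt through the Python code and each template in /etc/mailman/en/.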

    The full patches that I applied to all the Mailman files are available on, in case you want to see how I did it.

    Posted on Saturday 08 November 2014 by Bradley M. Kuhn.

    Submit comments on this post to <>.


  • 2014-10-10: Always Follow the Money

    Selena Larson wrote an article describing the Male Allies Plenary Panel at the Anita Borg Institute's Grace Hopper Celebration on Wednesday night. There is a video of the panel available (that's the YouTube link; the links on Anita Borg Institute's website don't work with Free Software).

    Selena's article pretty much covers it. The only point that I thought useful to add was that one can “follow the money” here. Interestingly enough, Facebook, Google, GoDaddy, and Intuit were all listed as top-tier sponsors of the event. I find it a strange correlation that every man on this panel works for a company that sponsored the event. Are there no male allies to the cause of women in tech worth hearing from who work for companies that, say, don't have enough money to sponsor the event? Perhaps that's true, but it's somewhat surprising.

    Honest US Congresspeople often say that the main problem with corruption of campaign funds is that those who donate simply have more access and time to make their case to the congressional representatives. They aren't buying votes; they're buying access for conversations. (This was covered well in This American Life, Episode 461).

    I often see a similar problem in the “Open Source” world. The loudest microphones can be bought by the highest bidder (in various ways), so we hear more from the wealthiest companies. The amazing thing about this story, frankly, is that buying the microphone didn't work this time. I'm very glad the audience refused to let it happen! I'd love to see a similar reaction at the corporate-controlled “Open Source and Linux” conferences!

    Update later in the day: The conference I'm commenting on above is the same conference where Satya Nadella, CEO of Microsoft, said that women shouldn't ask for raises, and Microsoft is also a top-tier sponsor of the conference. I'm left wondering if anyone who spoke at this conference didn't pay for the privilege of making these gaffes.

    Posted on Friday 10 October 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2014-09-26: IRS Tax-Exempt Status & FaiF 0x4E

    Historically, I used to write a blog post for each episode of the audcast, Free as in Freedom that Karen Sandler and I released. However, since I currently do my work on FaiF exclusively as a volunteer, I often found it difficult to budget time for a blog post about each show.

    However, enough happened in between when Karen and I recorded FaiF 0x4E and when it was released earlier this week that I thought I'd comment on those events.

    First, with regard to the direct content of the show, I've added some detail to the 0x4E show notes about the additional research I did into various other non-software-related non-profit organizations that I mention in the show.

    The primary thrust of Karen's and my discussion on the show, though, regarded how the IRS is (somewhat strangely) the regulatory body for various types of organizational statuses, and that our legislation lumps many disparate activities together under the term “non-profit organizations” in the USA. The types of these available, outlined in 26 USC § 501(c), vary greatly in what they do, and in what the IRS intends for them to do.

    Interestingly, a few events occurred in mainstream popular culture since FaiF 0x4E's recording that relate to this subject. First, on John Oliver's Last Week Tonight Episode 18 on 2014-09-21 (skip to 08:30 in the video to see the part I'm commenting on), John actually pulled out a stack of interlocking Form 990s from various related non-profit organizations and walked through some details of misrepresentation to the public regarding the organization's grant-making activities. As an avid reader of Form 990s, I was absolutely elated to see a popular comic pundit actually assign his staff the task of reviewing Form 990s to follow the money. (Although I wish he hadn't wasted the paper to print them out merely to make a sight gag.)

    Meanwhile, the failure of just about everyone to engage in such research remains my constant frustration. I'm often amazed that people judge non-profit organizations merely based on a (Stephen-Colbert-style) gut reaction of truthiness rather than researching the budgetary actions of such organizations. Given that tendency, the mandatory IRS public disclosures for all these various non-profits end up almost completely hidden in plain sight.

    Granted, you sometimes have to make as many as three clicks, and type the name of the organization twice on Foundation Center's Form 990 finder to find these documents. That's why I started to maintain the FLOSS Foundation gitorious repository of Form 990s of all the orgs related to Open Source and Free Software — hoping that a git cloneable solution would be more appealing to geeks. Yet, it's rare that anyone besides those of us who maintain the repository reads these. The only notable exception is Brian Proffitt's interesting article back in March 2012, which made use of FLOSS Foundation Form 990 data. But, AFAIK, that's the only time the media has looked at any FLOSS Foundations' Form 990s.

    The final recent story related to non-profits was linked to by Conservancy Board of Directors member Mike Linksvayer on In the article from Slate that Mike references there, Jordan Weissmann points out that the NFL is a 501(c)(6). Weissmann further notes that permission for football to be classified under 501(c)(6) rules seems like pork barrel politics in the first place.

    These disparate events — the Tea Party attacks against IRS 501(c)(4) denials, John Oliver's discussion of the Miss America Organization, Weissmann's specific angle in reporting the NFL scandals, and (more parochially) Yorba's 501(c)(3) and OpenStack Foundation's 501(c)(6) application denials — are brief moments of attention on non-profit structures in the USA. In such moments, we're invited to dig deeper and understand what is really going on, using public information that's readily accessible. So, why do so many people use truthiness rather than data to judge the performance and behavior of non-profit organizations? Why do so many funders, grant-makers and donors admit to never even reading the Form 990 of the organizations whom they support and with whom they collaborate? I ask, of course, rhetorically, but I'd be delighted if there is any answer beyond: “because they're lazy”.

    Posted on Friday 26 September 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-09-22: The LinkedIn Lawsuit Is a Step Forward But Doesn't Go Far Enough

    Years ago, I wrote a blog post about how I don't use Google Plus, Google Hangouts, Facebook, Twitter, Skype, LinkedIn or other proprietary network services. I talked in that post about how I'm under constant and immense social pressure to use these services. (It's often worse than the peer pressure one experiences as a teenager.)

    I discovered a few months ago, however, that one form of this peer pressure was actually a product of nefarious practices by one of the vendors — namely Linked In. Today, I learned a lawsuit is now proceeding against Linked In on behalf of the users whose contacts were spammed repeatedly by Linked In's clandestine use of people's address books.

    For my part, I suppose I should be glad that I'm “well connected”, but that means I get multiple emails from Linked In almost every single day, and indeed, as the article (linked to above) states, each person's spam arrives three times over a period of weeks. I was initially furious at people whom I'd met for selling my contact information to Linked In (which, of course, they did), but many of them indeed told me they were never informed by Linked In that such spam generation would occur once they'd complete the sale of all their contact data to Linked In.

    This is just yet another example of proprietary software companies mistreating users. If we had a truly federated Linked-In-like service, we'd be able to configure our own settings in this regard. But, we don't have that. (I don't think anyone is even writing one.) This is precisely why it's important to boycott these proprietary solutions, so at the very least, we don't complacently forget that they're proprietary, or inadvertently mistreat our colleagues who don't use those services in the interim.

    Finally, the lawsuit seems to focus solely on the harm caused to Linked In users who were embarrassed professionally. (I can say that indeed I was pretty angry at many of my contacts for a while when I thought they were choosing to spam me three times each, so that harm is surely real.) But Linked In's violation of the CAN-SPAM Act should also not be ignored, and I hope someone will take action on that point, too.

    Posted on Monday 22 September 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-09-11: Understanding Conservancy Through the GSoC Lens

    [ A version of this post originally appeared on the Google Open Source Blog, and was cross-posted on Conservancy's blog. ]

    Software Freedom Conservancy, Inc. is a 501(c)(3) non-profit charity that serves as a home to Open Source and Free Software projects. Such is easily said, but in this post I'd like to discuss what that means in practice for an Open Source and Free Software project and why such projects need a non-profit home. In short, a non-profit home makes the lives of Free Software developers easier, because they have less work to do outside of their area of focus (i.e., software development and documentation).

    As the summer of 2014 ends, Google Summer of Code (GSoC) coordination work exemplifies the value a non-profit home brings its Free Software projects. GSoC is likely the largest philanthropic program in the Open Source and Free Software community today. However, one of the most difficult things for organizations that seek to take advantage of such programs is the administrative overhead necessary to take full advantage of the program. Google invests heavily in making it easy for organizations to participate in the program — such as by handling the details of stipend payments to students directly. However, to take full advantage of any philanthropic program, the benefiting organization has some work to do. For its member projects, Conservancy is the organization that gets that logistical work done.

    For example, Google kindly donates $500 to the mentoring organization for every student it mentors. However, these funds need to go “somewhere”. If the funds go to an individual, there are two inherent problems. First, that individual is responsible for taxes on that income. Second, funds that belong to an organization as a whole are now in the bank account of a single project leader. Conservancy solves both of those problems: because Conservancy is a tax-exempt charity, the mentor payments are available for organizational use under its tax exemption. Furthermore, Conservancy maintains earmarked funds for each of its projects. Thus, Conservancy keeps the mentor funds for the Free Software project, and the project leaders can later vote to make use of the funds in a manner that helps the project and Conservancy's charitable mission. Often, projects in Conservancy use their mentor funds to send developers to important conferences to speak about the project and recruit new developers and users.

    Meanwhile, Google also offers to pay travel expenses for two mentors from each mentoring organization to attend the annual GSoC Mentor Summit (and, this year, it's an even bigger Reunion conference!). Conservancy handles this work on behalf of its member projects in two directions. First, for developers who don't have a credit card or otherwise are unable to pay for their own flight and receive reimbursement later, Conservancy staff book the flights on Conservancy's credit card. For the other travelers, Conservancy handles the reimbursement details. On the back end of all of this, Conservancy handles all the overhead annoyances and issues in requesting the POs from Google, invoicing for the funds, and tracking to ensure payment is made. While the Google staff is incredibly responsive and helpful on these issues, the Googlers need someone on the project's side to take care of the details. That's what Conservancy does.

    GSoC coordination is just one of the many things that Conservancy does every day for its member projects. If there's anything other than software development and documentation that you can imagine a project needs, Conservancy does that job for its member projects. This includes not only mundane items such as travel coordination, but also issues as complex as trademark filings and defense, copyright licensing advice and enforcement, governance coordination and mentoring, and fundraising for the projects. Some of Conservancy's member projects have been so successful in Conservancy that they've been able to fund developer salaries — often part-time but occasionally full-time — for years on end to allow them to focus on improving the project's software for the public benefit.

    Finally, if your project seeks help with regard to handling its GSoC funds and travel, or anything else mentioned on Conservancy's list of services to member projects, Conservancy is welcoming new applications for membership. Your project could join Conservancy's more than thirty other member projects and receive these wonderful services to help your community grow and focus on its core mission of building software for the public good.

    Posted on Thursday 11 September 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2014-07-15: Why The Kallithea Project Exists

    [ This is a version of an essay that I originally published on Conservancy's blog ].

    Eleven days ago, Conservancy announced Kallithea. Kallithea is a GPLv3'd system for hosting and managing Mercurial and Git repositories on one's own servers. As Conservancy mentioned in its announcement, Kallithea is indeed based on code released under GPLv3 by RhodeCode GmbH. Below, I describe why I was willing to participate in helping Conservancy become a non-profit home to an obvious fork (as this is the first time Conservancy ever welcomed a fork as a member project).

    The primary impetus for Kallithea is that more recent versions of RhodeCode GmbH's codebase contain a very unorthodox and ambiguous license statement, which states:

    (1) The Python code and integrated HTML are licensed under the GPLv3 license as is RhodeCode itself.
    (2) All other parts of the RhodeCode including, but not limited to the CSS code, images, and design are licensed according to the license purchased.

    Simply put, this licensing scheme is either (a) a GPL violation, (b) an unclear license permission statement under the GPL that leaves redistributors unsure of their rights, or (c) both.

    When members of the Mercurial community first brought this license to my attention about ten months ago, my first focus was to form a formal opinion regarding (a). Of course, I did form such an opinion, and you can probably guess what that is. However, I realized a few weeks later that this analysis really didn't matter in this case; the situation called for a more innovative solution.

    Indeed, I recalled at that time the disputes between AT&T and University of California at Berkeley over BSD. In that case, while nearly all of the BSD code was adjudicated as freely licensed, the dispute itself was painful for the BSD community. BSD's development slowed nearly to a standstill for years while the legal disagreement was resolved. Court action — even if you're in the right — isn't always the fastest nor best way to push forward an important Free Software project.

    In the case of RhodeCode's releases, there was an obvious and more productive solution. Namely, the 1.7.2 release of RhodeCode's codebase, written primarily by Marcin Kuzminski, was fully released under GPLv3-only, and provided an excellent starting point for a GPLv3'd fork. Furthermore, some of the improved code in the 2.2.5 era of RhodeCode's codebase was explicitly licensed under GPLv3 by RhodeCode GmbH itself. Finally, many volunteers produced patches for all versions of RhodeCode's codebase and released those patches under GPLv3, too. Thus, there was already a burgeoning GPLv3-friendly community yearning to begin.

    My primary contribution, therefore, was to lead the process of vetting and verifying a completely indisputable GPLv3'd version of the codebase. This was extensive and time consuming work; I personally spent over 100 hours to reach this point, and I suspect many Kallithea volunteers have already spent that much and more. Ironically, the most complex part of the work so far was verifying and organizing the licensing situation regarding third-party Javascript (released under a myriad of various licenses). You can see the details of that work by reading the revision history of Kallithea (or, you can read an overview in Kallithea's LICENSE file).

    Like with any Free Software codebase fork, acrimony and disagreement led to Kallithea's creation. However, as the person who made most of the early changesets for Kallithea, I want to thank RhodeCode GmbH for explicitly releasing some of their work under GPLv3. Even as I hereby reiterate publicly my previously private request that RhodeCode GmbH correct the parts of their licensing scheme that are (at best) problematic, and (at worst) GPL-violating, I also point out this simple fact to those who have been heavily criticizing and admonishing RhodeCode GmbH: the situation could be much worse! RhodeCode could have simply never released any of their code under the GPLv3 in the first place. After all, there are many well-known code hosting sites that refuse to release any of their code (or release only a pittance of small components). By contrast, the GPLv3'd RhodeCode software was nearly a working system that helped bootstrap the Kallithea community. I'm grateful for that, and I welcome RhodeCode developers to contribute to Kallithea under GPLv3. I note, of course, that RhodeCode developers sadly can't incorporate any of our improvements in their codebase, due to their problematic license. However, I extend again my offer (also made privately last year) to work with RhodeCode GmbH to correct its licensing problems.

    Posted on Tuesday 15 July 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2014-06-18: USPTO Affirms Copyleft-ish Hack on Trademark

    I don't often say good things about the USPTO, so I should take the opportunity: the trademark revocation hack to pressure the change of the name of the sports team called the Redskins was a legal hack in the same caliber as copyleft. Presumably Blackhorse deserves the credit for this hack, but the USPTO showed it was sound.

    Update, 2014-06-19 & 2014-06-20: A few have commented that this isn't a hack in the way copyleft is. They have not made an argument for this, only pointed out that the statute prohibits racially disparaging trademarks. I thought it would be obvious why I was calling this a copyleft-ish hack, but I guess I need to explain. Copyleft uses copyright law to pursue a social good unrelated to copyright at all: it uses copyright to promote a separate social aim — the freedom of software users. Similarly, I strongly suspect Blackhorse doesn't care one whit about trademarks and why they exist or even that they exist. Blackhorse is using the trademark statute to put financial pressure on an institution that is doing social harm — specifically, by reversing the financial incentives of the institution bent on harm. This is analogous to the way copyleft manipulates the financial incentives of software development toward software freedom using the copyright statute. I explain more in this comment.

    Fontana's comments argue that the USPTO press release is designed to distance itself from the TTAB's decision. Fontana's point is accurate, but the TTAB is ultimately part of the USPTO. Even if some folks at the USPTO don't like the TTAB's ruling, the USPTO is actually arguing with itself, not a third party. Fontana further pointed out that the TTAB is an Article I tribunal, so there can be Executive Branch “judges” who have some level of independence. Thanks to Fontana for pointing to that research; my earlier version of this post was incorrect, and I've removed the incorrect text. (Pam Chestek, BTW, was the first to point this out, but Fontana linked to the documentation.)

    Posted on Wednesday 18 June 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-06-11: Node.js Removes Its CLA

    I've had my disagreements with Joyent's management of the Node.js project. In fact, I am generally auto-skeptical of any Open Source and/or Free Software project run by a for-profit company. However, I also like to give credit where credit is due.

    Specifically, I'd like to congratulate Joyent for making the right decision today to remove one of the major barriers to entry for contribution to the Node.js project: its CLA. In an announcement today (see the section labeled “Easier Contribution”), Joyent said it no longer requires contributors to sign the CLA and will (so it seems) accept contributions simply licensed under the permissive MIT license. In short, Node.js is, as of today, an inbound=outbound project.

    While I'd prefer if Joyent would in addition switch the project to the Apache License 2.0 — or even better, the Affero GPLv3 — I realize that neither of those things is likely to happen. :) Given that, dropping the CLA is the next best outcome possible, and I'm glad it has happened.

    For further reading on my positions against CLAs, please see these two older blog posts:

    Posted on Wednesday 11 June 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-06-09: Why Your Project Doesn't Need a Contributor Licensing Agreement

    [ This is a version of an essay that I originally published on Conservancy's blog ].

    For nearly a decade, a battle has raged between two distinct camps regarding something called Contributor Licensing Agreements (CLAs). I've previously written a long treatise on the issue. The article below summarizes the basics of why CLAs aren't necessary.

    In the most general sense, a CLA is a formal legal contract between a contributor to a FLOSS project and the “project” itself0. Ostensibly, this agreement seeks to assure the project, and/or its governing legal entity, has the appropriate permissions to incorporate contributed patches, changes, and/or improvements to the software and then distribute the resulting larger work.

    In practice, most CLAs in use today are deleterious overkill for that purpose. CLAs simply shift legal blame for any patent infringement, copyright infringement, or other bad acts from the project (or its legal entity) back onto its contributors. Meanwhile, since vetting every contribution for copyright and/or patent infringement is time-consuming and expensive, no existing organization actually does that work; it's unfeasible to do so effectively. Thus, no one knows (in the general case) if the contributors' assurances in the CLA are valid. Indeed, since it's so difficult to determine if a given work of software infringes a patent, it's highly likely that any contributor submitting a patent-infringing patch did so inadvertently and without any knowledge that the patent even existed — even regarding patents controlled by their own company1.

    The undeniable benefit to CLAs relates to contributions from for-profit companies who likely do hold patents that read on the software. It's useful to receive from such companies (whenever possible) a patent license for any patents exercised in making, using or selling the FLOSS containing that company's contributions. I agree that such an assurance is nice to have, and I might consider supporting CLAs if there was no other cost associated with using them. However, maintenance of CLA-assent records requires massive administrative overhead.

    More disastrously, CLAs require the first interaction between a FLOSS project and a new contributor to involve a complex legal negotiation and a formal legal agreement. CLAs twist the empowering, community-oriented, enjoyable experience of FLOSS contribution into an annoying exercise in pointless bureaucracy, which (if handled properly) requires a business-like, grating haggle between necessarily adverse parties. And, that's the best possible outcome. Admittedly, few contributors actually bother to negotiate about the CLA. CLAs frankly rely on our “Don't Read & Click ‘Agree’” culture — thereby tricking contributors into bearing legal risk. FLOSS project leaders shouldn't rely on “gotcha” fine print like car salespeople.

    Thus, I encourage those considering a CLA to look past the “nice assurances we'd like to have — all things being equal” and focus on “what legal assurances our FLOSS project actually needs to assure it thrives”. I've spent years doing that analysis; I've concluded quite simply: in this regard, all a project and its legal home actually need is a clear statement and/or assent from the contributor that they offer the contribution under the project's known FLOSS license. Long ago, the now famous Open Source lawyer Richard Fontana dubbed this legal policy with the name “inbound=outbound”. It's a powerful concept that shows clearly the redundancy of CLAs.

    Most importantly, “inbound=outbound” makes a strong and correct statement about the FLOSS license the project chooses. FLOSS licenses must contain all the legal terms that are necessary for a project to thrive. If the project is unwilling to accept (inbound) contribution of code under the terms of the license it chose, that's a clear indication that the project's (outbound) license has serious deficiencies that require immediate remedy. This is precisely why I urge projects to select a copyleft license with a strong patent clause, such as the GPLv3. With a license like that, CLAs are unnecessary.

    Meanwhile, the issue of requesting the contributors' assent to the project's license is orthogonal to the issue of CLAs. I do encourage use of clear systems (either formal or informal) for that purpose. One popular option is called the Developer Certificate of Origin (DCO). Originally designed for the Linux project and published by the OSDL under the CC-By-SA license, the DCO is a mechanism to assure contributors have confirmed their right to license their contribution under the project's license. Typically, developers indicate their agreement to the DCO with a specially formed tag in their DVCS commit log. Conservancy's Evergreen, phpMyAdmin, and Samba projects all use modified versions of the DCO.
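    As a concrete illustration of that commit-log tag, here is a sketch of the sign-off workflow with git in a throwaway repository (the name and email are hypothetical; `git commit -s` appends the Signed-off-by trailer that projects using the DCO treat as the contributor's certification):

    ```shell
    # Sketch of the DCO sign-off workflow with git, in a throwaway repository.
    # The name/email are hypothetical placeholders.
    tmp=$(mktemp -d) && cd "$tmp"
    git init -q .
    git config user.name "Jane Hacker"
    git config user.email "jane@example.org"
    # -s appends the "Signed-off-by:" trailer from user.name/user.email
    git commit -q --allow-empty -s -m "Fix typo in docs"
    # Show the resulting commit message, including the trailer
    git log -1 --format=%B
    ```

    The last command prints the commit message ending with a Signed-off-by: Jane Hacker <jane@example.org> line; maintainers of DCO-using projects check for that trailer when reviewing submissions.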

    Conservancy's Selenium project uses a license assent mechanism somewhat closer to a formal CLA. In this method, the contributors must complete a special online form wherein they formally assent to the license of the project. The project keeps careful records of all assents separately from the code repository itself. This mechanism is a bit heavy-weight, but ultimately simply formally implements the same inbound=outbound concept.

    However, most projects use the same time-honored and successful mechanism used throughout the 35-year history of the Free Software community. Simply, they publish clearly in their developer documentation and/or other key places (such as mailing list subscription notices) that submissions using the normal means to contribute to the project — such as patches to the mailing list or pull and merge requests — indicate the contributors' assent for inclusion of that software in the canonical version under the project's license.

    Ultimately, CLAs are much ado about nothing. Lawyers are trained to zealously represent their clients, and as such they often seek an outcome that maximizes the leverage of their clients' legal rights, but they typically ignore other important benefits that are outside their profession. The most ardent supporters of CLAs have yet to experience first-hand the arduous daily work required to manage a queue of incoming FLOSS contributions. Those of us who have done the latter easily see that avoiding additional barriers to entry is paramount. A beautifully crafted CLA — jam-packed with legalese that artfully shifts all the blame onto the contributors — may make some corporate attorneys smile, but I've never seen one bring anything but a frown and a sigh from FLOSS developers.

    0Only rarely does an unincorporated, unaffiliated project request CLAs. Typically, CLAs name a corporate entity — a non-profit charity (like Conservancy), a trade association (like OpenStack Foundation), or a for-profit company, as its ultimate beneficiary. On rare occasions, the beneficiary of a CLA is a single individual developer.

    1I've yet to meet any FLOSS developer who has read their own employer's entire patent portfolio.

    Posted on Monday 09 June 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-06-08: Resolving Weirdness In Thinkpad T60 Hotkeys

    In keeping with my tendency to write a blog post about any technical issue I find that takes me more than five minutes to figure out when searching the Internet, I include below a resolution to a problem that took me, embarrassingly, nearly two and a half hours across two different tries to figure out.

    The problem appeared when I took the Debian 7 (wheezy) hard drive out of the Lenovo Thinkpad T61 I had been using, which had failed, and put it into a Lenovo Thinkpad T60. (I've been trying to switch fully to the T60 for everything because it is supported by Coreboot.)

    image of a Lenovo T60 Thinkpad keyboard with volume buttons circled in purple. When I switched, everything was working fine, except the volume buttons on the Thinkpad T60 (those three buttons in the top left-hand corner of the keyboard, shown circled in purple in the image on the right) no longer did what I expected. I expected they would ultimately control the PulseAudio volume, issuing the equivalent of pactl set-sink-mute 0 0 and the appropriate pactl set-sink-volume 0 commands for my sound card. When PulseAudio is running and you type those commands on the command line, the volume works properly, and, when running under X, I see the popup windows from my desktop environment showing the volume changes. So I knew nothing was wrong with the sound configuration when I switched the hard drive to the new machine, since the command-line tools worked and did the right things. Somehow, though, the buttons weren't sending the same commands in whatever manner they once did.

    I assumed at first that the buttons simply generated X events. The story turns out to be a bit more complex: when I ran xev, I saw those buttons did not, in fact, generate any X events. That made it clear that nothing from X windows “up” (i.e., the desktop software) had anything to do with the situation.

    So, I first proceeded to research whether these volume keys were supposed to generate X events. I discovered that there are indeed XF86VolumeUp, XF86VolumeDown and XF86VolumeMute key events (I'd seen those before, in fact, while doing similar research years ago). However, the advice online conflicted about whether having them generate X events was the best way to solve this. Most of the discussions I found assumed the keys were already generating X events and offered advice about how to bind those keys to scripts or to your desktop setup of choice0.

    I found various old documentation about the thinkpad_acpi daemon, which I quickly found was out of date: its functionality had long ago been incorporated directly into Linux's ACPI support and no longer required an additional daemon. This led me to just begin poking around in how the ACPI subsystem handles such keys.

    I quickly found the xev equivalent for acpi: acpi_listen. This was the breakthrough I needed to solve this problem. I ran acpi_listen and discovered that while other Thinkpad key sequences, such as Fn-Home (to increase brightness), generated output like:

                    video/brightnessup BRTUP 00000086 00000000 K
                    video/brightnessup BRTUP 00000086 00000000
    but the volume up, down, and mute keys generated no output. Therefore, it's pretty clear at this point that the problem is something related to configuration of ACPI in some way. I had a feeling this would be hard to find a solution for.

    That's when I started poking around in /proc, and found that /proc/acpi/ibm/volume was changing each time I hit one of these keys. So, Linux clearly was receiving notice that these keys were pressed. Why, then, wasn't the ACPI subsystem notifying anything else, including whatever interface acpi_listen talks to?

    Well, this was a hard one to find an answer to. I have to admit that I found the answer through pure serendipity. I had already loaded this old bug report for a GNU/Linux distribution waning in popularity and found that someone resolved the ticket with the command:

                    cp /sys/devices/platform/thinkpad_acpi/hotkey_all_mask /sys/devices/platform/thinkpad_acpi/hotkey_mask
    This command:
                    # cat /sys/devices/platform/thinkpad_acpi/hotkey_all_mask /sys/devices/platform/thinkpad_acpi/hotkey_mask 
    quickly showed that the masks didn't match. So I did:
                    # cat /sys/devices/platform/thinkpad_acpi/hotkey_all_mask > /sys/devices/platform/thinkpad_acpi/hotkey_mask 
    and that single change caused the buttons to work again as expected, including causing the popup notifications of volume changes and the like.
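    That one-line fix can be wrapped in a small shell function for reuse — a sketch; sync_hotkey_mask is a name I made up, and writing the real sysfs file requires root:

    ```shell
    # Sketch: copy the "all" hotkey mask over the active mask if they differ,
    # which is exactly the fix described above. sync_hotkey_mask is a
    # hypothetical helper name; the paths in the comment below are the real
    # sysfs files on a Thinkpad running the thinkpad_acpi driver.
    sync_hotkey_mask() {
        all_mask=$1
        cur_mask=$2
        if cmp -s "$all_mask" "$cur_mask"; then
            echo "already correct"
        else
            # Overwrite the active mask with the full mask
            cat "$all_mask" > "$cur_mask" && echo "updated"
        fi
    }
    # On a real T60, one would run (as root):
    # sync_hotkey_mask /sys/devices/platform/thinkpad_acpi/hotkey_all_mask \
    #                  /sys/devices/platform/thinkpad_acpi/hotkey_mask
    ```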

    Additional searching showed this hotkey issue is documented in Linux's Thinkpad ACPI documentation, which states:

    The hot key bit mask allows some control over which hot keys generate events. If a key is "masked" (bit set to 0 in the mask), the firmware will handle it. If it is "unmasked", it signals the firmware that thinkpad-acpi would prefer to handle it, if the firmware would be so kind to allow it (and it often doesn't!).

    I note that on my system, running the command the documentation recommends for resetting to defaults puts me back in the wrong state:

                    # cat /proc/acpi/ibm/hotkey 
                    status:         enabled
                    mask:           0x00ffffff
                    commands:       enable, disable, reset, <mask>
                    # echo reset > /proc/acpi/ibm/hotkey 
                    # cat /proc/acpi/ibm/hotkey 
                    status:         enabled
                    mask:           0x008dffff
                    commands:       enable, disable, reset, <mask>
                    # echo 0xffffffff > /proc/acpi/ibm/hotkey

    So, I added that last command above to restore Linux's control of all the ACPI hot keys, which I suspect is what I want. I'll update the post if doing that causes other problems that I hadn't seen before. I'll also update the post to note whether this setting is saved over reboots, as I haven't rebooted the machine since I did this. :)
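    If the setting turns out not to survive a reboot, one common way to reapply it at boot — a sketch under the assumption that the system runs /etc/rc.local at startup, which varies by distribution — is to add the same command there before the final exit 0:

    ```shell
    #!/bin/sh -e
    # /etc/rc.local (sketch, assumption: this distribution runs rc.local at boot).
    # Re-enable thinkpad-acpi handling of all hotkeys; 0xffffffff is the mask
    # used above — adjust if your hardware differs.
    echo 0xffffffff > /proc/acpi/ibm/hotkey
    exit 0
    ```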

    0Interestingly, as has happened to me often recently, much of the most useful information that I find about any complex topic regarding how things work in modern GNU/Linux distributions is found on the Arch or Crunchbang online fora and wikis. It's quite interesting to me that these two distributions appear to be the primary place where the types of information that every distribution once needed to provide are kept. Their wikis are becoming the canonical references of how a distribution is constructed, since much of the information found therein applies to all distributions, but distributions like Fedora and Debian attempt to make it less complex for the users to change the configuration.

    Posted on Sunday 08 June 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-06-04: Be Sure to Comment on FCC's NPRM 14-28

    I remind everyone today, particularly USA Citizens, to be sure to comment on the FCC's Notice of Proposed Rulemaking (NPRM) 14-28. They even did a sane thing and provided an email address you can write to rather than using their poorly designed web forums, but PC Magazine published relatively complete instructions for other ways. The deadline isn't for a while yet, but it's worth getting it done so you don't forget. Below is my letter in case anyone is interested.

    Dear FCC Commissioners,

    I am writing in response to NPRM 14-28 — your request for comments regarding the “Open Internet”.

    I am a trained computer scientist and I work in the technology industry. (I'm a software developer and software freedom activist.) I have subscribed to home network services since 1989, starting with the Prodigy service, and switching to Internet service in 1991. Initially, I used a PSTN single-pair modem and eventually upgraded to DSL in 1999. I still have a DSL line, but it's sadly not much faster than the one I had in 1999, and I explain below why.

    In fact, I've watched the situation get progressively worse, not better, since the Telecommunications Act of 1996. While my download speeds are a little bit faster than they were in the late 1990s, I now pay substantially more for only small increases in upload speed, even in a major urban market. In short, it's become increasingly difficult to actually purchase true Internet connectivity service anywhere in the USA. But first, let me explain what I mean by “true Internet connectivity”.

    The Internet was created as a peer-to-peer medium where all nodes were equal. In the original design of the Internet, every device has its own IP address and, if the user wanted, that device could be addressed directly and fully by any other device on the Internet. For its part, the network between the two nodes was intended to merely move the packets between those nodes as quickly as possible — treating all those packets the same way, and analyzing them only with publicly available algorithms that everyone agreed were correct and fair.

    Of course, the companies who typically appeal to (or even fight) the FCC want the true Internet to simply die. They seek to turn the promise of a truly peer-to-peer network of equality into a traditional broadcast medium that they control. They frankly want to manipulate the Internet into a mere television broadcast system (with the only improvement to that being “more stations”).

    Because of this, the three following features of the Internet — inherent in its design — are now extremely difficult for individual home users to purchase at reasonable cost from so-called “Internet providers” like Time Warner, Verizon, and Comcast:

    • A static IP address, which allows the user to be a true, equal node on the Internet. (And, related: IPv6 addresses, which could end the claim that static IP addresses are a precious resource.)
    • An unfiltered connection, that allows the user to run their own webserver, email server and the like. (Most of these companies block TCP ports 80 and 25 at the least, and usually many more ports, too).
    • Reasonable choices between the upload/download speed tradeoff.

    For example, in New York, I currently pay nearly $150/month to an independent ISP just to have a static, unfiltered IP address with 10 Mbps down and 2 Mbps up. I work from home and the 2 Mbps up is incredibly slow for modern usage. However, I still live in the Slowness because upload speeds greater than that are extremely price-restrictive from any provider.

    In other words, these carriers have designed their networks to prioritize all downloading over all uploading, and to purposely place the user behind many levels of Network Address Translation and network filtering. In this environment, many Internet applications simply do not work (or require complex work-arounds that disable key features). As an example: true diversity in VoIP accessibility and service has almost entirely been superseded by proprietary single-company services (such as Skype) because SIP, designed by the IETF (in part) for VoIP applications, did not fully anticipate that nearly every user would be behind NAT and unable to use SIP without complex work-arounds.

    I believe this disastrous situation centers around problems with the Telecommunications Act of 1996. While the ILECs are theoretically required to license network infrastructure fairly at bulk rates to CLECs, I've frequently seen — both professionally and personally — wars waged against CLECs by ILECs. CLECs simply can't offer their own types of services that merely “use” the ILECs' connectivity. The technical restrictions placed by ILECs force CLECs to offer the same style of service the ILEC offers, and at a higher price (to cover their additional overhead in dealing with the ILECs)! It's no wonder there are hardly any CLECs left.

    Indeed, in my 25 year career as a technologist, I've seen many nasty tricks by Verizon here in NYC, such as purposeful work-slowdowns in resolution of outages and Verizon technicians outright lying to me and to CLEC technicians about the state of their network. For my part, I stick with one of the last independent ISPs in NYC, but I suspect they won't be able to keep their business going for long. Verizon either (a) buys up any CLEC that looks too powerful, or, (b) if Verizon can't buy them, Verizon slowly squeezes them out of business with dirty tricks.

    The end result is that we don't have real options for true Internet connectivity for home or on-site business use. I'm already priced out of getting a 10 Mbps upload with a static IP and all ports usable. I suspect within 5 years, I'll be priced out of my current 2 Mbps upload with a static IP and all ports usable.

    I realize the problems that most users are concerned about on this issue relate to their ability to download bytes from third-party companies like Netflix. Therefore, it's all too easy for Verizon to play out this argument as if it's big companies vs. big companies.

    However, the real fallout from the current system is that the cost of personal Internet connectivity that allows individuals equal existence on the network is so high that few bother. The consequence, thus, is that only those who are heavily involved in the technology industry even know what types of applications would be available if everyone had a static IP with all ports usable and equal upload and download speeds of 10 Mbps or higher.

    Yet, that's the exact promise of network connectivity that I was taught about as an undergraduate in Computer Science in the early 1990s. What I see today is the dystopian version of the promise. My generation of computer scientists have been forced to constrain their designs of Internet-enabled applications to fit a model that the network carriers dictate.

    I realize you can't possibly fix all these social ills in the network connectivity industry with one rule-making, but I hope my comments have perhaps given a slightly different perspective of what you'll hear from most of the other commenters on this issue. I thank you for reading my comments and would be delighted to talk further with any of your staff about these issues at your convenience.


    Bradley M. Kuhn,
    a citizen of the USA since birth, currently living in New York, NY.

    Posted on Wednesday 04 June 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2014-05-14: To Serve Users

    (Spoiler alert: spoilers regarding a 1950s science fiction short story that you may not have read appear in this blog post.)

    Mitchell Baker announced today that Mozilla Corporation (or maybe Mozilla Foundation? She doesn't really say…) will begin implementing proprietary software by default in Firefox at the behest of wealthy and powerful media companies. Baker argues this serves users: that Orwellian phrasing caught my attention most.

    image from Twilight Zone Episode, To Serve Man, showing the book with the alien title on the front and its translation.

    In the old science fiction story, To Serve Man (which was later adapted for The Twilight Zone), aliens come to earth and freely share various technological advances, and offer free visits to the alien world. Eventually, the narrator, who remains skeptical, begins translating one of their books. The title is innocuous, and even well-meaning: To Serve Man. Only too late does the narrator realize that the book isn't about service to mankind, but rather — a cookbook.

    It's in the same spirit that Baker seeks to serve Firefox's users up on a platter to the MPAA, the RIAA, and like-minded wealthy for-profit corporations. Baker's only defense appears to be that other browser vendors have done the same, and cites specifically for-profit companies such as Apple, Google, and Microsoft.

    Theoretically speaking, though, the Mozilla Foundation is supposed to be a 501(c)(3) non-profit charity which told the IRS its charitable purpose was: to keep the Internet a universal platform that is accessible by anyone from anywhere, using any computer, and … develop open-source Internet applications. Baker fails to explain how switching Firefox to include proprietary software fits that mission. In fact, with a bit of revisionist history, she says that open source was merely an “approach” that Mozilla Foundation was using, not their mission.

    Of course, Mozilla Foundation is actually a thin non-profit shell wrapped around a much larger entity called the Mozilla Corporation, which is a for-profit company. I have always been dubious about this structure, and actions like this make it obvious that “Mozilla” is focused on being a for-profit company, competing with other for-profit companies, rather than a charity serving the public (at least, in the way that I mean “serving”).

    Meanwhile, I greatly appreciate that various Free Software communities maintain forks and/or alternative wrappers around many web browser technologies, which, like Firefox, succumb easily to for-profit corporate control. These efforts (such as Debian's iceweasel fork and GNOME's Epiphany interface to WebKit) provide a nice “canary in the coal mine” to confirm there is enough software-freedom-respecting code still released to make these browsers usable by those who care about software freedom and reject the digital restrictions management that Mozilla now embraces. OTOH, Baker is right about one thing: given that so few people oppose proprietary software, there soon may not be much of a web left for those of us who stand firmly for software freedom. Sadly, with today's announcement, Mozilla has abandoned curtailing that dystopia and will instead help accelerate its onset.

    Related Links:

    Posted on Wednesday 14 May 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-05-10: Federal Appeals Court Decision in Oracle v. Google

    [ Update on 2014-05-13: If you're more of a listening rather than reading type, you might enjoy the Free as in Freedom oggcast that Karen Sandler and I recorded about this topic. ]

    I have a strange relationship with copyright law. Many copyright policies of various jurisdictions, the USA in particular, are draconian at best and downright vindictive at worst. For example, during the public comment period on ACTA, I commented that I think it's always wrong, as a policy matter, for copyright infringement to carry criminal penalties.

    That said, much of what I do in my work in the software freedom movement is enforcement of copyleft: assuring that the primary legal tool, which defends the freedom of the Free Software, functions properly, and actually works — in the real world — the way it should.

    As I've written about before at great length, copyleft functions primarily because it uses copyright law to stand up and defend the four freedoms. It's commonly called a hack on copyright: turning the copyright system which is canonically used to restrict users' rights, into a system of justice for the equality of users.

    However, it's this very activity that leaves me with a weird relationship with copyright. Copyleft uses the restrictive force of copyright in the other direction, but that means the greater the negative force, the more powerful the positive force. So, as I read yesterday the Federal Circuit Appeals Court's decision in Oracle v. Google, I had that strange feeling of simultaneous annoyance and contentment. In this blog post, I attempt to state why I am both glad for and annoyed with the decision.

    I stated clearly after Alsup's NDCA decision in this case that I never thought APIs were copyrightable, nor does any developer really think so in practice. But, when considering the appeal, note carefully that the court of appeals wasn't assigned the general job of considering whether APIs are copyrightable. Their job was to figure out if the lower court made an error in judgment in this particular case, and to discern any issues that were missed previously. I think that's what the Federal Circuit Court attempted to do here, and while IMO they too erred regarding a factual issue, I don't think their decision is wholly useless nor categorically incorrect.

    Their decision is worth reading in full. I'd also urge anyone who wants to opine on this decision to actually read the whole thing (which rarely happens in these situations). I bet most pundits out there opining already didn't read the whole thing. I read the decision as soon as it was announced, and I didn't get this post up until early Saturday morning, because it took that long to read the opinion in detail, go back to other related texts and verify some details and then write down my analysis. So, please, go ahead, read it now before reading this blog post further. My post will still be here when you get back. (And, BTW, don't fall for that self-aggrandizing ballyhoo some lawyers will feed you that only they can understand things like court decisions. In fact, I think programmers are going to have an easier time reading decisions about this topic than lawyers, as the technical facts are highly pertinent.)

    Ok, you've read the decision now? Good. Now, I'll tell you what I think in detail: (As always, my opinions on this are my own, IANAL and TINLA and these are my personal thoughts on the question.)

    The most interesting thing, IMO, about this decision is that the Court focused on a fact from trial that clearly has more nuance than they realize. Specifically, the Court claims many times in this decision that Google conceded that it copied the declaring code used in the 37 packages verbatim (pg 12 of the Appeals decision).

    I suspect the Court imagined the situation too simply: that there was a huge body of source code text, and that Google engineers sat there, simply cutting-and-pasting from Oracle's code right into their own code for each of the 7,000 lines or so of function declarations. However, I've chatted with some people (including Mark J. Wielaard) who are much more deeply embedded in the Free Software Java world than I am, and they pointed out it's highly unlikely anyone did a blatant cut-and-paste job to implement Java's core library API, for various reasons. I thus suspect that Google didn't do it that way either.

    So, how did the Appeals Court come to this erroneous conclusion? On page 27 of their decision, they write: Google conceded that it copied it verbatim. Indeed, the district court specifically instructed the jury that ‘Google agrees that it uses the same names and declarations’ in Android. Charge to the Jury at 10. So, I reread page 10 of the final charge to the jury. It actually says something much more verbose and nuanced. I've pasted together below all the parts where Alsup's jury charge mentions this issue (emphasis mine):

    Google denies infringing any such copyrighted material … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. … The copyrighted Java platform has more than 37 API packages and so does the accused Android platform. As for the 37 API packages that overlap, Google agrees that it uses the same names and declarations but contends that its line-by-line implementations are different … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. Google states, however, that the elements it has used are not infringing … With respect to the API documentation, Oracle contends Google copied the English-language comments in the registered copyrighted work and moved them over to the documentation for the 37 API packages in Android. Google agrees that there are similarities in the wording but, pointing to differences as well, denies that its documentation is a copy. Google further asserts that the similarities are largely the result of the fact that each API carries out the same functions in both systems.

    Thus, in the original trial, Google did not admit to copying any of Oracle's text, documentation or code (other than the rangeCheck thing, which is moot on the API copyrightability issue). Rather, Google said two separate things: (a) they did not copy any material (other than rangeCheck), and (b) they admitted that the names and declarations are the same, not because Google copied those names and declarations from Oracle's own work, but because they perform the same functions. In other words, Google made various arguments about why those names and declarations look the same, but for reasons other than “mundane cut-and-paste copying from Oracle's copyrighted works”.

    For us programmers, this is of course a distinction without any difference. Frankly, when we programmers look at this situation, we make many obvious logical leaps at once. Specifically, we all think APIs in the abstract can't possibly be copyrightable (since that's absurd), and we work backwards from there with some quick thinking, that goes something like this: it doesn't make sense for APIs to be copyrightable because if you explain to me in enough detail what the API has to do, such that I have sufficient information to implement it, my declarations of the functions of that API are going to necessarily be quite similar to yours — so much so that they'll be nearly indistinguishable from what those function declarations might look like if I had cut-and-pasted them. So, the fact is, if we both sit down separately to implement the same API, well, then we're likely going to have two works that look similar. However, it doesn't mean I copied your work. And, besides, it makes no sense for APIs, as a general concept, to be copyrightable, so why are we discussing this again?0
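    To make that argument concrete, here's a minimal sketch in Java. The class and method names are my own invention for illustration; this is not Oracle's or Google's actual code. Given only an English-language description of an API, say “a public static method max that takes two ints and returns the greater”, two clean-room implementers will inevitably produce the same declaring line, even though their implementing code differs:

    ```java
    // Two hypothetical, independently written implementations of the same
    // API contract: "public static int max(int a, int b), returning the greater".

    class MathImplA {
        // The declaring line below is dictated by the API contract itself.
        public static int max(int a, int b) {
            if (a > b) {
                return a;
            }
            return b;
        }
    }

    class MathImplB {
        // Same declaring line, necessarily; only the implementing code differs.
        public static int max(int a, int b) {
            // Branch-free arithmetic: (a + b + |a - b|) / 2 equals the larger value.
            return (a + b + Math.abs(a - b)) / 2;
        }
    }
    ```

    The declaring lines are character-for-character identical even though no copying occurred; looking at the two source files alone, you cannot tell independent reimplementation from cut-and-paste.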

    But this is reasoning that a programmer can love and the Courts hate. The Courts want to take a set of laws the legislature passed, some precedents that their system gave them, along with a specific set of facts, and then see what happens when the law is applied to those facts. Juries, in turn, have the job of finding which facts are accurate, which aren't, and then coming to a verdict, upon receiving instructions about the law from the Court.

    And that's right where the confusion began in this case, IMO. The original jury, to start with, likely had trouble distinguishing three distinct things: the general concept of an API, the specification of the API, and the implementation of an API. Plus, they were told by the judge to assume API's were copyrightable anyway. Then, it got more confusing when they looked at two implementations of an API, parts of which looked similar for purely mundane technical reasons, and assumed (incorrectly) that textual copying from one file to another was the only way to get to that same result. Meanwhile, the jury was likely further confused that Google argued various affirmative defenses against copyright infringement in the alternative.

    So, what happens with the Appeals Court? The Appeals court, of course, has no reason to believe the finding of fact of the jury is wrong, and it's simply not the appeals court's job to replace the original jury's job, but to analyze the matters of law decided by the lower court. That's why I'm admittedly troubled and downright confused that the ruling from the Appeals court seems to conflate the issue of literal copying of text with similarities in independently developed text. That is a factual issue in any given case, but that question of fact is the central nuance of API copyrightability, and it seems the Appeals Court glossed over it. The Appeals Court simply fails to distinguish between literal cut-and-paste copying from a given API's implementation and serendipitous similarities that are likely to happen when two API implementations support the same API.

    But that error isn't the interesting part. Of course, this error is a fundamental incorrect assumption by the Appeals Court, and as such the primary rulings are effectively conclusions based on a hypothetical fact pattern and not the actual fact pattern in this case. However, after poring over the decision for hours, it's the only error that I found in the appeals ruling. Thus, setting the fundamental error aside, their ruling has some good parts. For example, I'm rather impressed and swayed by their argument that the lower court misapplied the merger doctrine because it analyzed the situation based on the options available to Google with regard to functionality, rather than those available to Sun/Oracle. To quote:

    We further find that the district court erred in focusing its merger analysis on the options available to Google at the time of copying. It is well-established that copyrightability and the scope of protectable activity are to be evaluated at the time of creation, not at the time of infringement. … The focus is, therefore, on the options that were available to Sun/Oracle at the time it created the API packages.

    Of course, cropping up again in that analysis is that same darned confusion the Court had with regard to copying this declaration code. The ruling goes on to say: But, as the court acknowledged, nothing prevented Google from writing its own declaring code, along with its own implementing code, to achieve the same result.

    To go back to my earlier point, Google likely did write their own declaring code, and the code ended up looking the same as the other code, because there was no other way to implement the same API.

    In the end, Mark J. Wielaard put it best when he read the decision, pointing out to me that the Appeals Court seemed almost angry that the jury hung on the fair use question. It reads to me, too, like the Appeals Court is slyly saying: the right affirmative defense for Google here is fair use, and a new jury really needs to sit and look at it.

    My conclusion is that this just isn't a decision about the copyrightability of APIs in the general sense. The question the Court would need to consider to actually settle that question would be: “If we believe an API itself isn't copyrightable, but its implementation is, how do we figure out when copyright infringement has occurred when there are multiple implementations of the same API floating around, which of course have declarations that look similar?” But the court did not consider that fundamental question, because the Court assumed (incorrectly) there was textual cut-and-paste copying. The decision here, in my view, is about a more narrow, hypothetical question that the Court decided to ask itself instead: “If someone textually copies parts of your API implementation, are merger doctrine, scènes à faire, and de minimis affirmative defenses likely to succeed?” In this hypothetical scenario, the Appeals Court claims “such defenses rarely help you, but a fair use defense might help you”.

    However, on this point, in my copyleft-defender role, I don't mind this decision very much. The one thing this decision clearly seems to declare is: “if there is even a modicum of evidence that direct textual copying occurred, then the alleged infringer must pass an extremely high bar of affirmative defense to show infringement didn't occur”. In most GPL violation cases, the facts aren't nuanced: there is always clearly an intention to incorporate and distribute large textual parts of the GPL'd code (i.e., not just a few function declarations). As such, this decision is probably good for copyleft, since on its narrowest reading, this decision upholds the idea that if you go mixing in other copyrighted stuff, via copying and distribution, then it will be difficult to show no copyright infringement occurred.

    OTOH, I suspect that most pundits are going to look at this in an overly contrasted way: NDCA said APIs aren't copyrightable, and the Appeals Court said they are. That's not what happened here, and if you look at the situation that way, you're making the same kinds of oversimplifications that the Appeals Court seems to have erroneously made.

    The most positive outcome here is that a new jury can now narrowly consider the question of fair use as it relates to serendipitous similarity of multiple API function declaration code. I suspect a fresh jury focused on that narrow question will do a much better job. The previous jury had so many complex issues before them, I suspect that they were easily conflated. (Recall that the previous jury considered patent questions as well.) I've found that people who haven't spent their lives training (as programmers and lawyers have) to delineate complex matters and separate truly unrelated issues do a poor job at such. Thus, I suspect the jury won't hang the second time if they're just considering the fair use question.

    Finally, with regard to this ruling, I suspect this won't become immediate, frequently cited precedent. The case is remanded, so a new jury will first sit down and consider the fair use question. If that jury finds fair use and thus no infringement, Oracle's next appeal will be quite weak, and the Appeals Court likely won't reexamine the question in any detail. In that outcome, very little has changed overall: we'll have certainty that API's aren't copyrightable, as long as any textual copying that occurs during reimplementation is easily called fair use. By contrast, if the new jury rejects Google's fair use defense, I suspect Google will have to appeal all the way to SCOTUS. It's thus going to be at least two years before anything definitive is decided, and the big winners will be wealthy litigation attorneys — as usual.

    0This is of course true for any sufficiently simple programming task. I used to be a high-school computer science teacher. Frankly, while I was successful twice in detecting student plagiarism, it was pretty easy to get false positives sometimes. And certainly I had plenty of student programmers who wrote their function declarations the same for the same job! And no, those weren't the students who plagiarized.

    Posted on Saturday 10 May 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2014-04-03: Open Source as Last Resort

    “Open Source as Last Resort” appears to be popular this week. First, Canonical, Ltd. will finally liberate UbuntuOne server-side code, but only after abandoning it entirely. Second, Microsoft announced a plan to release its .NET compiler platform, Roslyn, under the Apache License, spinning it into an (apparent, based on description) 501(c)(6) organization called the Dot Net Foundation.

    This strategy is pretty bad for software freedom. It gives fodder to the idea that “open source doesn't work”, because these projects are likely to fail (or have already failed) when they're released. (I suspect, although I don't know of any studies on this, that) most software projects, like most start-up organizations, fail in the first five years. That's true if they're proprietary software projects or not.

    But, using code liberation as a last-resort attempt to gain interest in a failing codebase only gives a bad name to the licensing and community-oriented governance that creates software freedom. I therefore think we should not laud these sorts of releases, even though they liberate more code. We should call them what they are: too little, too late. (I said as much in the five-year-old bug ticket where community members have been complaining that UbuntuOne server-side is proprietary.)

    Finally, a note on using a foundation to attempt to bolster a project community in these cases:

    I must again point out that the type of organization matters greatly. Those who are interested in the liberated .NET codebase should be asking Microsoft if they're going to form a 501(c)(6) or a 501(c)(3) (and I suspect it's the former, which bodes badly).

    I know some in our community glibly dismiss this distinction as some esoteric IRS issue, but it really matters with regard to how the organization treats the community. 501(c)(6) organizations are trade associations that serve for-profit businesses. 501(c)(3)'s serve the public at large. There's a huge difference in their behavior and activities. While it's possible for a 501(c)(3) to fail to serve all the public's interest, it's corruption when they so fail. When 501(c)(6)'s serve only their corporate members' interest, possibly to the detriment of the public, those 501(c)(6) organizations are just doing the job they are supposed to do — however distasteful it is.

    Note: I said “open source” on purpose in this post in various places. I'm specifically saying that term because it's clear these companies' actions are not in the spirit of software freedom, nor even inspired therefrom, but are pure and simple strategy decisions.

    Posted on Thursday 03 April 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2014-03-31: The Change in My Role at Conservancy

    Today, Conservancy announced the addition of Karen Sandler to our management team. This addition to Conservancy's staff will greatly improve Conservancy's ability to help its many member projects.

    This outcome is one I've been working towards for a long time. I've focused for at least a year on fundraising for Conservancy in hopes that we could hire a third full-time staffer. For the last few years, I've been doing basically two full-time jobs, since I've needed to give my personal attention to virtually everything Conservancy does. This obviously doesn't scale, so my focus has been on increasing capacity at Conservancy to serve more projects better.

    I (and the entire Board of Directors of Conservancy) have often worried that if I were to disappear (leave Conservancy, or otherwise just drop dead), Conservancy might not survive without me. Such heavy reliance on one person is a bug, not a feature, in an organization. That's why I worked so hard to recruit Karen Sandler as Conservancy's new Executive Director. Admittedly, she helped create Conservancy and has been involved since its inception. But, having her full-time on staff is a great step forward: there's no single point of failure anymore.

    It's somewhat difficult for me to relinquish some of my personal control over Conservancy. I have been mostly responsible for building Conservancy from a small unstaffed “thin” fiscal sponsor into a “full-service” fiscal sponsor that provides virtually any work that a Free Software project requests. Much of that has been thanks to my work, and it's tough to let someone else take that over.

    However, handing off the Executive Director position to Karen specifically made this transition easy. Put simply, I trust Karen, and I recruited her personally to take over (one of) my job(s). She really believes in software freedom in the way that I do, and she's taught me at least half the things I know about non-profit organizational management. We've collaborated on so many projects and have been friends and colleagues — through both rough and easy times — for nearly a decade. While I think I'm justified in saying I did a pretty good job as Conservancy's Executive Director, Karen will do an even better job than I did.

    I'm not stepping aside completely from Conservancy management, though. I'm continuing in the role of President and I remain on the Board of Directors. I'll be involved with all strategic decisions for the organization, and I'll be the primary manager for a few of Conservancy's program activities: including at least the non-profit accounting project and Conservancy's license enforcement activities. My primary staff role, however, will now be under the title “Distinguished Technologist” — a title we borrowed from HP. The basic idea behind this job at Conservancy is that my day-to-day work helps the organization understand the technology of Free Software and how it relates to Conservancy's work. As an initial matter, I suspect that my focus for the next few years is going to be the non-profit accounting project, since that's the most urgent place where Free Software is inadequately providing technological solutions for Conservancy's work. (Now, more than ever, I urge you to donate to that campaign, since it will become a major component of funding my day-to-day work. :)

    I'm somewhat surprised that, even in the six hours since this announcement, I've already received emails from Conservancy member project representatives worded as if they expect they won't hear from me anymore. While, indeed, I'll cease to be the front-line contact person for issues related to Conservancy's work, Conservancy and its operations will remain my focus. Karen and I plan a collaborative management style for the organization, so I suspect for many things, Karen will brief me about what's going on and will seek my input. That said, I'm looking forward to a time very soon when most Conservancy management decisions won't primarily be mine anymore. I'm grateful for Karen, as I know that the two of us running Conservancy together will make a great working environment for both of us, and I really believe that she and I as a management team are greater than the sum of our parts.

    Related Links

    Posted on Monday 31 March 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2014-01-26: GCC, LLVM, Copyleft, Companies, and Non-Profits

    [ Please keep in mind in reading this post that while both FSF and Conservancy are mentioned, and I have leadership roles at both organizations, these opinions are, as always, my own and don't necessarily reflect the views of FSF and/or Conservancy. ]

    Most people know I'm a fan of RMS' writing about Free Software and I agree with most (but not all) of his beliefs about software freedom politics and strategy. I was delighted to read RMS' post about LLVM on the GCC mailing list on Friday. It's clear and concise, and, as usual, I agree with most (but not all) of it, and I encourage people to read it. Meanwhile, upon reading comments on LWN on this post, I felt the need to add a few points to the discussion.

    Firstly, I'm troubled to see so many developers, including GCC developers, conflating various social troubles in the GCC community with the choice of license. I think it's impossible to deny that culturally, the GCC community faces challenges, like any community that has lasted for so long. Indeed, there's a long political history of GCC that even predates my earliest involvement with the Free Software community (even though I'm now considered an old-timer in Free Software in part because I played a small role — as a young, inexperienced FSF volunteer — in helping negotiate the EGCS fork back into the GCC mainline).

    But none of these politics really relate to GCC's license. The copyleft was about ensuring that there were never proprietary improvements to the compiler, and AFAIK no GCC developers ever wanted that. In fact, GCC was ultimately the first major enforcement test of the GPL, and ironically that test sent us on the trajectory that led to the current situation.

    Specifically, as I've spoken about in my many talks on GPL compliance, the earliest publicly discussed major GPL violation was by NeXT computing when Steve Jobs attempted and failed (thanks to RMS' GPL enforcement work) to make the Objective C front-end to GCC proprietary. Everything for everyone involved would have gone quite differently if that enforcement effort had failed.

    As it stands, copyleft was upheld and worked. For years, until quite recently (in context of the history of computing, anyway), Apple itself used and relied on the Free Software GCC as its primary and preferred Objective C compiler, because of that enforcement against NeXT so long ago. But, that occurrence also likely solidified Jobs' irrational hatred of copyleft and software freedom, and Apple was on a mission to find an alternative compiler — but writing a compiler is difficult and takes time.

    Meanwhile, I should point out that copyleft advocates sometimes conflate issues in analyzing the situation with LLVM. I believe most LLVM developers when they say that they don't like proprietary software and that they want to encourage software freedom. I really think they do. And, for all of us, copyleft isn't a religion, or even a belief — it's a strategy to maximize software freedom, and no one (AFAICT) has said it's the only viable strategy to do that. It's quite possible the strategy of LLVM developers of changing the APIs quickly to thwart proprietarization might work. I really doubt it, though, and here's why:

    I'll concede that LLVM was started with the best of academic intentions to make better compiler technology and share it freely. (I've discussed this issue at some length with Chris Lattner directly, and I believe he actually is someone who wants more software freedom in the world, even if he disagrees with copyleft as a strategy.) IMO, though, the problem we face is exploitation by various anti-copyleft, software-freedom-unfriendly companies that seek to remove every copyleft component from any software stack. Their reasons for pursuing that goal may or may not be rational, but the collateral damage has already become clear: it's possible today to license proprietary improvements to LLVM that aren't released as Free Software. I predict this will become more common, notwithstanding any technical efforts of LLVM developers to thwart it. (Consider, by way of historical example, that proprietary combined works with the Apache web server continue to this very day, despite Apache developers' decades of “we'll break APIs, so don't keep your stuff proprietary” claims.)

    Copyleft is always a trade-off between software freedom and adoption. I don't admonish people for picking the adoption side over the software freedom side, but I do think as a community we should be honest with ourselves that copyleft remains the best strategy to prevent proprietary improvements and forks and no other strategy has been as successful in reaching that goal. And, those who don't pick copyleft have priorities other than software freedom ranked higher in their goals.

    As a penultimate point, I'll reiterate something that Joe Buck pointed out on the LWN thread: a lot of effort was put into creating a licensing solution that solved the copyleft concerns of GCC plugins. FSF's worry for more than a decade (reaching back into the late 1990s) was that a GCC plugin architecture would allow GCC's intermediate representation to be written to an output file, which would, in turn, allow a wholly separate program to optimize the software by reading and writing that file format, and thus circumvent the protections of copyleft. The GCC Runtime Library Exception (GCC RTL Exception) is (in my biased opinion) an innovative licensing solution that solves the problem — the ironic outcome: you are only permitted to perform proprietary optimization with GCC on GPL'd software, but not on proprietary software.

    The problem was that the GCC RTL Exception came too late. While I led the GCC RTL Exception drafting process, I don't take the blame for delays. In fact, I fought for nearly a year to prioritize the work when FSF's outside law firm was focused on other priorities and ignored my calls for urgency. I finally convinced everyone, but the work got done far too late. (IMO, it should have been timed for release in parallel with GPLv3 in June 2007.)

    Finally, I want to reiterate that copyleft is a strategy, not a moral principle. I respect the LLVM developers' decision to use a different strategy for software freedom, even if it isn't my preferred strategy. Indeed, I respect it so much that I supported Conservancy's offer of membership to LLVM in Software Freedom Conservancy. I still hope the LLVM developers will take Conservancy up on this offer. I think that regardless of a project's preferred strategy for software freedom — copyleft or non-copyleft — it's important for the developers to have a not-for-profit charity as a gathering place for developers, separate from their for-profit employer affiliations.

    Undue for-profit corporate influence is the biggest problem that software freedom faces today. Indeed, I don't know a single developer in our community who likes to see their work proprietarized. Developers, generally speaking, want to share their code with other developers. It's lawyers and business people with dollar signs in their eyes who want to make proprietary software. Those people sometimes convince developers to make trade-offs (which I don't agree with myself) to work on proprietary software (usually in exchange for funding some of their work time on upstream Free Software). Meanwhile, those for-profit-corporate folks frequently spread lies and half-truths about the copyleft side of the community — in an effort to convince developers that their Free Software projects “won't survive” if those developers don't follow the exact plan The Company proposes. I've experienced these manipulations myself — for example, in April 2013, a prominent corporate lawyer with an interest in LLVM told me to my face that his company would continue spreading false rumors that I'd use LLVM's membership in Conservancy to push the LLVM developers toward copyleft, despite my public statements to the contrary. (Again, for the record, I have no such intention and I'd be delighted to help LLVM be led in a non-profit home by its rightful developer leaders, whichever Open Source and Free Software license they choose.)

    In short, the biggest threat to the future of software has always been for-profit companies who wish to maximize profits by exploiting the code, developers and users while limiting their software freedom. Such companies try every trick in pursuit of that goal. As such, I prefer copyleft as a strategy. However, I don't necessarily admonish those who pick a different strategy. The reason that I encourage membership of non-copylefted projects in Conservancy (and other 501(c)(3) charities) is to give those projects the benefits of a non-profit home that maximize software freedom using the project's chosen strategy, whatever it may be.

    Posted on Sunday 26 January 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2014-01-24: Choosing Software Freedom Costs Money Sometimes

    Apparently, the company that makes my hand lotion brand uses a coupon website for its coupons. The only way to print a coupon is to use a proprietary software browser plugin called “couponprinter.exe” (which presumably implements some form of “coupon DRM”).

    So, as of today, I actually have a price, in dollars, that it cost me to avoid proprietary software. Standing up for software freedom cost me $1.50 today. :) I suppose there are some people who would argue in this situation that they have to use proprietary software, but of course I'm not one of them.

    The interesting thing is that this program has an OS X version and a Windows version, but nothing for iOS or Android/Linux. Now, if they had the latter, it'd surely be proprietary software anyway.

    That said, the site does have a “send a paper copy to a postal address” option, and I have ordered the coupon to be sent to me. But it expires 2014-03-31 and I'm out of hand lotion today; thus whether or not I get to use the coupon before expiration is an open question.

    I'm curious to try to order as many copies as possible of this coupon just to see if they implement ARM properly.

    ARM is of course not a canonical acronym for what I mean here. I mean “Analog Restrictions Management”, as opposed to the DRM (“Digital Restrictions Management”) that I mentioned above. I doubt ARM will become a standard acronym for this, given that the TLA “ARM” is already quite overloaded.

    Posted on Friday 24 January 2014 by Bradley M. Kuhn.

    Comment on this post in this conversation.



  • 2013-12-05: Considerations on a non-profit home for your project

    [ This post of mine is cross-posted from Conservancy's blog.]

    I came across this email thread this week, and it seems to me that Node.js is facing a standard decision that comes up in the life of most Open Source and Free Software projects. It inspired me to write some general advice to Open Source and Free Software projects who might be at a similar crossroads0. Specifically, at some point in the history of a project, the community is faced with the decision of whether the project should be housed at a specific for-profit company, or have a non-profit entity behind it instead. Further, project leaders must consider, if they pursue the latter, whether the community should form its own non-profit or affiliate with one that already exists.

    Choosing a governance structure is a tough and complex decision for a project — and there is always some status quo that (at least) seems easier. Thus, there will always be a certain amount of acrimony in this debate. I have my own biases on this, since I am the Executive Director of Conservancy, a non-profit home for Open Source and Free Software projects, and because I have studied the issue of non-profit governance for Open Source and Free Software for the last decade. I have a few comments based on that experience that might be helpful to projects who face this decision.

    The obvious benefit of a project housed in a for-profit company is that they'll usually have more resources to put toward the project — particularly if the project is of strategic importance to their business. The downside is that the company almost always controls the trademark, perhaps controls the copyright to some extent (e.g., by being the sole beneficiary of a very broad CLA or ©AA), and likely has a stronger say in the technical direction of the project. There will also always be “brand conflation” when something happens in the project (Did the project do it, or did the company?), and such is easily observable in the many for-profit-controlled Open Source and Free Software projects.

    By contrast, while a for-profit entity only needs to consider the interests of its own shareholders, a non-profit entity is legally required to balance the needs of many contributors and users. Thus, non-profits are a neutral home for activities of the project, and a neutral place for the trademark to live, perhaps a neutral place to receive CLAs (if the community even wants a CLA, that is), and to do other activities for the project. (Conservancy, for its part, has a list of what services it provides.)

    There are also differences among the non-profit options. The two primary USA options for Open Source and Free Software are 501(c)(3)'s (public charities) and 501(c)(6)'s (trade associations). 501(c)(3) public charities must always act in the public good, while 501(c)(6) trade associations act in the interest of their paying for-profit members. I'm a fan of the 501(c)(3)-style of non-profit, again, because I help run one. IMO, the choice between the two really depends on whether you want the project run and controlled by a consortium of for-profit businesses, or whether you want the project to operate as a public charity focused on advancing the public good by producing better Open Source and Free Software. BTW, the big benefit, IMO, to a 501(c)(3) is that the non-profit only represents the interests of the project with respect to the public good, so the IRS prohibits the charity from conflating its motives with any corporate interest (be they single or aggregate).

    If you decide you want a non-profit, there's then the decision of forming your own non-profit or affiliating with an existing non-profit. Folks who say it's easy to start a new non-profit are (mostly) correct; the challenge is in keeping it running. It's a tremendous amount of work and effort to handle the day-to-day requirements of non-profit management, which is why so many Open Source and Free Software projects choose to affiliate or join with an existing non-profit rather than form their own. I'd suggest strongly that any community look into joining an existing home, in part because many non-profit umbrellas permit the project to later “spin off” and form its own non-profit. Thus, joining an existing entity is not always a permanent decision.

    Anyway, as you've guessed, thinking about these questions is a part of what I do for a living. Thus, I'd love to talk (by email, phone or IRC) with anyone in any Open Source and Free Software community about joining Conservancy specifically, or even just to talk through all the non-profit options available. There are many options and existing non-profits, all with their own tweaks, so if a given community decides it'd like a non-profit home, there's lots to choose from and a lot to consider.

    I'd note finally that the different tweaks between non-profit options deserve careful attention. I often see people commenting that structures imposed by non-profits won't help with what they need. However, not all non-profits have the same type of structures, and they focus on different things. For example, Conservancy doesn't dictate anything regarding specific CLA rules, licensing, development models, and the like. Conservancy generally advises about all the known options, and helps the community come to the conclusions it wants and implement them well. The only place Conservancy has strict rules is with regard to the requirements and guidelines the IRS puts forward on 501(c)(3) status. Meanwhile, other non-profits do have strict rules for development models, or CLAs, and the like, which some projects prefer for various reasons.

    Update 2013-12-07: I posted a follow up on Node.js mailing list in the original discussion that inspired me to write the above.

    0BTW, I don't think how a community comes to that crossroads matters that much, actually. At some point in a project's history, this issue is raised, and, at that moment, a decision is before the project.

    Posted on Thursday 05 December 2013 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2013-11-13: The Trade-offs of Unpaid Free Software Labor

    I read with interest Ashe Dryden's blog post entitled The Ethics of Unpaid Labor and the OSS Community0, and I agree with much of it. At least, I agree with Dryden much more than I agree with Hanson's blog post that inspired Dryden's, since Hanson's seems almost completely unaware of the distinctions between Free Software funding in non-profit and for-profit settings, and I think Dryden's criticism that Hanson's view is narrowed by “white-male in a wealthy country” privilege is quite accurate. I think Dryden does understand the distinctions of non-profit vs. for-profit Free Software development, and Dryden has an excellent discussion on how wealthy and powerful individuals by default have more leisure time to enter the (likely fictional) Free Software development meritocracy via pure volunteer efforts.

    However, I think two key points remain missing in the discussions so far on this topic. Specifically, (a) the issue of license design as it relates to non-monetary compensation of volunteer efforts and (b) developers' goals in using volunteer Free Software labor to bootstrap employment. The two issues don't interrelate that much, so I'll discuss them separately.

    Copyleft Requirements as “Compensation” For Volunteer Contribution

    I'm not surprised that this discussion about volunteer vs. paid labor is happening completely bereft of reference to the licenses of the software in question. With companies and even many individuals so rabidly anti-copyleft recently, I suspect that everyone in the discussion is assuming that the underlying license structure of these volunteer contributions is non-copyleft.

    Strong copyleft's design, however, deals specifically with the problems inherent in uncompensated volunteer labor. By avoiding the possibility of proprietary derivatives, copyleft ensures that volunteer contributions do have, for lack of a better term, some strings attached: the requirement that even big and powerful companies that use the code treat the lowly volunteer contributor as a true equal.

    Companies have resources that allow them to quickly capitalize on improvements to Free Software contributed by volunteers, and thus the volunteers are always at an economic disadvantage. Requiring that the companies share improvements with the community ensures that the volunteers' labor doesn't go entirely uncompensated: at the very least, the volunteer contributor has equal access to all improvements.

    This phenomenon is in my opinion an argument for why there is less risk and more opportunity for contributors to copylefted codebases. Copyleft allows for some level of opportunity to the volunteer contributor that doesn't necessarily exist with non-copylefted codebases (i.e., the contributor is assured equal access to later improvements), and certainly doesn't exist with proprietary software.

    Volunteer Contribution As Employment Terms-Setting

    An orthogonal issue is the trend of employers using Free Software contribution as a hiring criterion. I've frankly found this trend disturbing for a wholly different reason than those raised in the current discussion. Namely, most employers who hire based on past Free Software contribution don't employ these developers to work on Free Software!

    Free Software is, frankly, in a state of cooption. (Open Source itself, as a concept, is part of that cooption.) As another part of that cooption, teams of proprietary software (or non-released, secret software) developers use methodologies and workflows that were once unique to Free Software. Therefore, these employers want to know if job candidates know those workflows and methodologies so that the employer can pay the developer to stop using those techniques for the good of software freedom and instead use them for proprietary and/or secretive software development.

    When I was in graduate school, one of the reasons I keenly wanted to be a core contributor to Free Software was not to just get paid for any software development, but specifically to gain employment writing software that would be Free Software. In those days, you picked a codebase you liked because you wanted to be employed to work on that upstream codebase. In fact, becoming a core contributor for a widely used copylefted codebase was once commonly a way to ensure you'd have your pick of jobs being paid to work on that codebase.

    These days, most developers, even though they are required to use some Free Software as part of their jobs, usually are assigned work on some non-Free Software that interacts with that Free Software. Thus, the original meme, that began in the early 1990s, of volunteer for a Free Software codebase so you can later get paid to work on it, has recently morphed into volunteer to work on Free Software so you can get a job working on some proprietary software. That practice is a complete corruption and cooption of the Free Software culture.

    All that said, I do agree with Dryden that we should do more funding at the entry level of Free Software development, and internships in particular, such as those through the OPW, are, as Dryden writes, absolutely essential to solve the obvious problem of under-representation by those with limited leisure time for volunteer contribution. I think such funding is best when it's done in a non-profit rather than a for-profit setting, for reasons that would require yet another blog post to explain.

    0Please note that I haven't seen any of the comments on Dryden's blog post or many of the comments that spawned it, because as near as I can tell, I can't use Disqus without installing proprietary software on my computer, through its proprietary Javascript. If someone can tell me how to read Disqus discussions without proprietary Javascript, I'd appreciate it.

    Posted on Wednesday 13 November 2013 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2013-11-08: Canonical, Ltd.'s Trademark Aggression

    I was disturbed to read that Canonical, Ltd.'s trademark aggression, which I've been vaguely aware of for some time, has reached a new height. And, I say this as someone who regularly encourages Free Software projects to register trademarks, and to occasionally do trademark enforcement and also to actively avoid project policies that might lead to naked licensing. Names matter, and Free Software projects should strive to strike a careful balance between assuring that names mean what they are supposed to mean, and also encourage software sharing and modification at the same time.

    However, Canonical, Ltd.'s behavior shows what happens when lawyers and corporate marketing run amok and fail to strike that necessary balance. Specifically, Canonical, Ltd. sent a standard cease and desist (C&D) letter to Micah F. Lee for running a site that, clearly to any casual reader, is not affiliated with Canonical, Ltd. or its Ubuntu® project. In fact, the site specifically tells you how to undo some anti-privacy stuff that Canonical, Ltd. puts into its Ubuntu, so there is no trademark-governed threat to its Ubuntu branding. Lee fortunately got legal assistance from the EFF, who wrote a letter explaining why Canonical, Ltd. was completely wrong.

    Anyway, this sort of bad behavior is so commonplace by Canonical, Ltd. that I'd previously decided to stop talking about it when it reached the crescendo of Mark Shuttleworth calling me a McCarthyist because of my Free Software beliefs and work. But, one comment on Micah's blog inspired me to comment here. Specifically, Jono Bacon, who leads Ubuntu's PR division under the dubious title of Community Manager, asks this insultingly naïve question as a comment on Micah's blog: Did you raise your concerns with the team who sent the email?.

    I am sure that Jono knows well what a C&D letter is and what one looks like. I also am sure that he knows that any lawyer would advise Micah not to engage with an adverse party on his own over a trademark dispute without adequate legal counsel. Thus, for Jono to suggest that there is some Canonical, Ltd. “team” that Micah should be talking to not only pathetically conflates Free Software community operations with corporate legal aggression, but also seems like a Canonical, Ltd. employee subtly suggesting that those who receive C&Ds from Canonical, Ltd.'s legal department should engage in discussion without seeking their own legal counsel.

    Free Software projects should get trademarks of their own. Indeed, I fully support that, and I encourage folks interested in this issue to listen to Pam Chestek's excellent talk on the topic at FOSDEM 2013 (which Karen Sandler and I broadcast on Free as in Freedom). However, true Free Software communities don't try to squelch Free Speech that criticizes their projects. It's deplorable that Canonical, Ltd. has an organized campaign between their lawyers and their public relations folks like Jono to (a) send aggressive C&D letters to Free Software enthusiasts who criticize Ubuntu and (b) follow up on those efforts by subtly shaming those who lawyer-up upon receiving that C&D.

    I should finally note that Canonical, Ltd. has an inappropriate and Orwellian predilection for coopting the words of our community (including the word “community” itself, BTW). Most people don't know that I myself registered the domain name back on 1999-08-06 (when Shuttleworth was still running Thawte) for a group of friends who liked to use the word canonical in the canonical way, and still do so today. However, thanks to Shuttleworth, it's difficult to use canonical in the canonical way anymore in Free Software circles, because Shuttleworth coopted the term and brand-markets on top of it. Ubuntu, for its part, is a word meaning human kindness that Shuttleworth has also coopted for his often unkind activities.

    Update at 16:17 on 2013-11-08: Canonical, Ltd. has posted a response regarding their enforcement action, which claims that their trademark policy is unusually permissive. This is true if the universe is “all trademark policies in the world”, but it is false if the universe is “Open Source and Free Software trademark policies”. Of course, like any good spin doctors, Canonical, Ltd. doesn't actually say this explicitly.

    Similarly, Canonical, Ltd. restates the oft-over-simplified claim that in trademark law a mark owner is expected to protect the authenticity of a trademark or risk losing the mark. What they don't tell you is why they believe failure to enforce in this specific instance carried specific risk. Why didn't they tell us? Because it doesn't. I suspect they could have simply asked for the disclaimer that Micah gave them willingly, and that would have satisfied the aforementioned risk adequately.

    Posted on Friday 08 November 2013 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2013-10-07: Using Perl PayPal API on Debian wheezy

    I recently upgraded to Debian wheezy. On Debian squeeze, I had no problem using the stock Perl module Business::PayPal::API to import PayPal transactions for Software Freedom Conservancy, via the Debian package libbusiness-paypal-api-perl.

    After the wheezy upgrade, something went wrong and it no longer worked. I reviewed some similar complaints that seem to relate to this resolved bug, but I don't think that was my problem.

    I ran strace to dig around and see what was going on. The working squeeze install did this:

                    select(8, [3], [3], NULL, {0, 0})       = 1 (out [3], left {0, 0})
                    write(3, "SOMEDATA"..., 1365) = 1365
                    rt_sigprocmask(SIG_BLOCK, [ALRM], [], 8) = 0
                    rt_sigaction(SIGALRM, {SIG_DFL, [], 0}, {SIG_DFL, [], 0}, 8) = 0
                    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
                    rt_sigprocmask(SIG_BLOCK, [ALRM], [], 8) = 0
                    rt_sigaction(SIGALRM, {0xxxxxx, [], 0}, {SIG_DFL, [], 0}, 8) = 0
                    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
                    alarm(60)                               = 0
                    read(3, "SOMEDATA", 5)               = 5

    But the same script on wheezy did this at the same point:

                    select(8, [3], [3], NULL, {0, 0})       = 1 (out [3], left {0, 0})
                    write(3, "SOMEDATA"..., 1373) = 1373
                    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
                    select(0, NULL, NULL, NULL, {0, 100000}) = 0 (Timeout)
                    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
                    select(0, NULL, NULL, NULL, {0, 100000}) = 0 (Timeout)
                    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
                    select(0, NULL, NULL, NULL, {0, 100000}) = 0 (Timeout)
                    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)

    I was pretty confused, and basically I still am, but then I noticed this in the documentation for Business::PayPal::API, regarding SOAP::Lite:

    if you have already loaded Net::SSLeay (or IO::Socket::SSL), then Net::HTTPS will prefer to use IO::Socket::SSL. I don't know how to get SOAP::Lite to work with IO::Socket::SSL (e.g., Crypt::SSLeay uses HTTPS_* environment variables), so until then, you can use this hack: local $IO::Socket::SSL::VERSION = undef;

    That hack didn't work, but I did confirm via strace that on wheezy, IO::Socket::SSL was getting loaded instead of Net::SSL. So, I did this, which was a complete and much worse hack:

                    use Net::SSL;
                    use Net::SSLeay;
                    $ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;
                    # Then:
                    use Business::PayPal::API qw(GetTransactionDetails TransactionSearch);

    … And this incantation worked. This isn't the right fix, but I figured I should publish this, as this ate up three hours, and it's worth the 15 minutes to write this post, just in case someone else tries to use Business::PayPal::API on wheezy.
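    For what it's worth, the Net::HTTPS documentation describes a PERL_NET_HTTPS_SSL_SOCKET_CLASS environment variable for forcing the socket class, which might offer a cleaner workaround than the hack above — I haven't verified this on wheezy, and the script name below is hypothetical:

    ```shell
    # Ask Net::HTTPS to prefer Net::SSL (from Crypt::SSLeay) over
    # IO::Socket::SSL, without editing the Perl source at all:
    export PERL_NET_HTTPS_SSL_SOCKET_CLASS=Net::SSL
    export PERL_LWP_SSL_VERIFY_HOSTNAME=0

    # ...then run the import script as usual, e.g.:
    #   perl import-paypal-transactions.pl
    ```

    If that works, it would at least confine the workaround to the environment rather than baking load-order tricks into the script itself.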

    I used to be a Perl expert once upon a time. This situation convinced me that I'm not. In the old days, I would've actually figured out what was wrong.

    Posted on Monday 07 October 2013 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2013-09-23: The Dangers of VC-Backed “Open Source”

    I'm thankful to Christopher Allan Webber for pointing me at this interesting post from Guillaume Lesniak, the developer of Focal (a once fully GPL'd camera application for Android/Linux), about how he was (IMO) pressured to give a proprietary license to the new CyanogenMod, Inc.

    I mostly think Guillaume's post speaks for itself, and I encourage readers of my blog to read it as well. When I read it, I couldn't help thinking about how this is what Free Software often becomes in the world of “Open Source”. Specifically, VCs, and the companies they back, just absolutely love to say they're doing “Open Source”, but it just goes to show the clear difference between “doing Open Source” and giving users software freedom. These VC-backed companies don't really want to share freedoms with their users: they want to exploit Free Software licenses to market more proprietary software.

    Years ago, I helped get the Replicant project started. I haven't been an active contributor to the project, but I hope that folks can see this is an actual, community-oriented, volunteer-run Free Software alternative firmware based on Android/Linux. In my opinion, any project controlled primarily by one company will likely never be all those things. I urge CyanogenMod users to switch to Replicant today!

    Posted on Monday 23 September 2013 by Bradley M. Kuhn.

    Comment on this post in this conversation.



  • 2013-06-26: Congratulations to Harald Welte on Another One

    I'd like to congratulate Harald Welte on yet another great decision in the Berlin court, this time regarding a long-known GPL violator called Fantec. There are so many violations of this nature that are of course so trivially easy to find; it's often tough to pick which one to take action on. Harald has done a great job being selective to make good examples of violators.

    Just as a bit of history, I first documented and confirmed the Fantec violation in January 2009, based on this email sent to the BusyBox mailing list. I discovered that the product didn't seem to be regularly on sale in the USA, so it wasn't ultimately part of the lawsuit that Conservancy and Erik Andersen filed in late 2009.

    However, since Fantec products were on sale mostly in Germany, it was a great case for Harald to pursue. I'm not surprised in the least that, even three years after I confirmed the violation, Harald found Fantec still out of compliance and was able to take action at that point. It's not surprising either that it took an entire year thereafter to get it resolved. My reaction to that was actually: Darn, that Berlin Court acts fast compared to Courts in the USA. :)

    Posted on Wednesday 26 June 2013 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2013-06-23: Matthew Garrett on Mir

    Matthew Garrett has a good blog post regarding Mir and Canonical, Ltd.'s CLA. I encourage folks to read it; I added a comment there.

    Posted on Sunday 23 June 2013 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2013-04-06: The Punditocracy of Unelected Technocrats

    All this past week, people have been emailing and/or pinging me on IRC to tell me to read the article, The Meme Hustler by Evgeny Morozov. The article is quite long, and while my day-job duties left me TL;DR'ing it for most of the week, I've now read it, and I understand why everyone kept sending me the article. I encourage you not to TL;DR it any longer yourself.

    Morozov centers his criticisms on Tim O'Reilly, but that's not all the article is about. I spend my days walking the Free Software beat as a (self-admitted) unelected politician, and I've encountered many spin doctors, including O'Reilly — most of whom wear the trappings of advocates for software freedom. As Morozov points out, O'Reilly isn't the only one; he's just the best at it. Morozov's analysis of O'Reilly can help us understand these P.T. Barnums in our midst.

    In 2001, I co-wrote Freedom or Power? with RMS in response to O'Reilly's very Randian arguments (which Morozov discusses). I remember working on that essay for (literally) days with RMS, in-person at the FSF offices (and at his office at MIT), while he would (again, literally) dance around the room, deep in thought, and then run back to the screen where I was writing to suggest a new idea or phrase to add. We both found it was really difficult to craft the right rhetoric to refute O'Reilly's points. (BTW, most people don't know that there were two versions of my and RMS' essay; the original one was published as a direct response to O'Reilly on his own website. One of the reasons RMS and I redrafted as a stand-alone piece was that we saw our original published response actually served to increase uptake of O'Reilly's position. We decided the issue was important enough it needed a piece that would stand on its own indefinitely to defend that key position.)

    Meanwhile, I find it difficult to express more than a decade later how turbulent that time was for hard-core Free Software advocates, and how concerted the marketing campaign against us was. While we were in the middle of Microsoft's attacks claiming the GPL was an unAmerican cancer, we also had O'Reilly's the freedom that matters is the freedom to pick one's own license meme propagating fast. There were dirty politics afoot at the time, too: this all occurred during the same three-month period when Eric Raymond called me an inmate taking over the asylum. In other words, the spin doctors were attacking software freedom advocates from every side! Morozov's article captures a bit of what it feels like to be on the wrong side of a concerted, organized PR campaign to manipulate public opinion.

    However, I suppose what I like most about Morozov's article is that it's the first time I've seen a rhetorical trick that spin doctors use discussed publicly and coherently. Notice, when you listen to a pundit, their undue sense of urgency; they invariably act as if what's happening now is somehow (to use a phrase the pundits love) “game changing”. What I typically see is such folks using urgency as a reason to make compromises quickly. Of course, the real goal is a get-rich-(or-famous)-quick scheme for themselves — not a greater cause. The sense of urgency leaves many people feeling that if they don't follow the meme, they'll be left in the dust. A colleague of mine once described this entrancing effect as dream-like, and that desire to stay asleep and keep dreaming is what lets the hustlers keep us under their spell.

    I've admittedly spent more time than I'd like refuting these spin doctors (or, as Morozov also calls them, meme hustlers). Such work seems unfortunately necessary because Free Software is in an important, multi-decade (but admittedly not urgent :) battle of cooption (which, BTW, every social justice movement throughout history has faced). The tide of cooption by spin doctors can be stemmed only with constant vigilance, so I practice it.

    Still, this all seems a cold, academic way to talk about the phenomenon. For these calculating Frank Luntz types, winning is enough; rhetoric, to them, is almost an end in itself (which I guess one might dub “Cicero 2.0”). For those of us who believe in the cause, the “game for the game's sake” remains distasteful because there are real principles at stake for us. Meanwhile, the most talented of these meme hustlers know well that what's a game to them matters emotionally to us, so they use our genuine concern against us at every turn. And, to make it worse, there's more of them out there than most people realize — usually carefully donning the trappings of allies. Kudos to Morozov for reminding us how many of these emperors have no clothes.

    Posted on Saturday 06 April 2013 by Bradley M. Kuhn.

    Comment on this post in this conversation.



  • 2012-12-18: Perl is Free Software's COBOL, and That's Ok!

    In 1991, I'd just gotten my first real programming job for two reasons: nepotism, and a willingness to write code for $12/hour. I was working as a contractor to a blood testing laboratory, where the main development job was writing custom software to handle, process, and do statistical calculations on blood testing results, primarily for paternity testing.

    My father had been a software developer since the early 1970s, and worked as a contractor at this blood lab since the late 1970s. As the calendar had marched toward the early 1990s, technology cruft had collected. The old TI mainframe, once the primary computer, now only had one job left: statistical calculation for paternity testing, written in TI's Pascal. Slowly but surely, the other software had been rewritten and moved to an AT&T 3B2/600 running Unix System VR3.2.3. That latter machine was the first access I had to a real computer, and certainly the first time I had access to Usenet. This changed my life.

    Ironically, even on that 3B2, the accounting system software was written in COBOL. This seemed like “more cruft” to me, but fortunately there was a third-party vendor who handled that software, so I didn't have to program in COBOL.

    I had the good fortune, actually, to help with the interesting problems, which included grokking data from a blood testing machine that dumped a bunch of data in some weird reporting format onto its RS-232 port at the end of every testing cycle. We had to pull the data off that RS-232 interface and load it into the database. Perl, since it treated regular expressions as first-class citizens, and had all the Unix device fundamentals baked in as native (for the RS-232 I/O), was the obvious choice.

    After that project, I was intrigued by this programming language that had made the job so easy. My father gave me a copy of the Camel book — which was, at that point, almost hot off the presses. I read it over a weekend and I decided that I didn't really want to program in any other language again. Perl was just 4 years old then; it was a young language — Perl 4 had just been released. I started trying to embed Perl into our database system, but it wasn't designed for embedding into other systems as a scripting language. So, I ended up using Tcl instead for the big project of rewriting the statistical calculation software to replace the TI mainframe. After a year or two writing tens of thousands of lines of Tcl, I was even more convinced that I'd rather be writing in Perl. When Perl 5 was released, I switched back to Perl and never really looked back.

    Perl ultimately became my first Free Software community. I lurked on perl5-porters for years, almost always a bit too timid to post, or ever send in a patch. But, as I finished my college degree and went to graduate school, I focused my thesis work on Perl and virtual machines. I went to the Perl conference every year. I was even in the room for the perl5-porters meeting the day after Jon Orwant's staged tantrum, which was the catalyst for the Perl 6 effort. I wrote more than a few RFC's during the Perl 6 specification process. And, to this day, even though I've since done plenty of Python development, too, when I need to program to do something, I open an Emacs buffer and start typing #!/usr/bin/perl.

    Meanwhile, I never did learn COBOL. But, I was amazed to hear that multiple folks who graduated with me eventually got jobs at a health insurance company. The company trained them in COBOL, so that they could maintain COBOL systems all day. Every once in a while, I idly search a job site for COBOL. Today, that search returns 2,338 open jobs. Most developers never hear about it, of course. It's far from the exciting new technology, but it's there, it's needed and it's obviously useful to someone. Indeed, the COBOL standard was just updated 10 years ago, in 2002!

    I notice these days, though, that when I mention having done a lot of Perl development in my life, the average Javascript, Python, or Haskell developer looks at me like I looked at my dad when he told me that the accounting system was written in COBOL. I'd bet they'd have my same sigh of relief when told that “someone else” maintains that code and they won't have to bother with it.

    Yet, I still know people heavily immersed in the Perl community. Indeed, there is a very active Perl community out there, just like there's an active COBOL community. I'm not active in Perl like I once was, but it's a community of people, who write new code and maintain old code in Perl, and that has value. More importantly, though, (and unlike COBOL), Perl was born on Usenet, and was released as Free Software from the day of its first release, twenty-five years ago today. Perl was born as part of Free Software culture, and it lives on.

    So, I get it now. I once scoffed at the idea that anyone would write in COBOL anymore, as if the average COBOL programmer was some sort of second-class technology citizen. COBOL programmers in 1991, and even today, are surely good programmers — doing useful things for their jobs. The same is true of Perl these days: maybe Perl is finally getting a bit old fashioned — but there are good developers, still doing useful things with Perl. Perl is becoming Free Software's COBOL: an aging language that still has value.

    Perl turns 25 years old today. COBOL was 25 years old in 1984, right at the time when I first started programming. To those young people who start programming today: I hope you'll learn from my mistake. Don't scoff at the Perl programmers. 25 years from now, you may regret scoffing at them as much as I regret scoffing at the COBOL developers. Programmers are programmers; don't judge them because you don't like their favorite language.

    Update (2013-04-12): I posted a comment on Allison Randal's blog about similar issues of Perl's popularity.

    Posted on Tuesday 18 December 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2012-12-14: The Symmetry of My UnAmerican McCarthyist Cancer

    In mid-2001, after working for FSF part-time for the prior year and a half, I'd actually just started working at FSF full-time. I'd recently relocated to Cambridge, MA to work on-site at the FSF offices. The phone started ringing. The aggressive Microsoft attacks had started; the press wanted to know FSF's response. First, Ballmer'd said the GPL was a cancer. Then, Allchin said it was unAmerican1. Then, Bill Gates added (rather pointlessly and oddly) that it was a pac-man that eats up your business. Microsoft even shopped weird talking-points to the press as part of their botched political axe-job on FSF.

    FSF staffing levels have always been small, but FSF was even smaller then. I led a staff of four to respond to the near constant press inquiries for the entire summer. We coordinated speaking engagements for RMS related to the attacks, and got transcripts published. We did all the stuff that you do when the wealthiest corporation in the world decides it wants to destroy a small 501(c)(3) charity that publishes a license that fosters software sharing. From my point of view, I'll admit now that I was, back then, in slightly over my head: this was my first-ever non-software-development job. I was new to politics, new to management, new to just about everything that I needed to do to lead the response to something like that. I learned fast; hopefully it was fast enough.

    The experience made a huge impression on me. I quickly got comfortable with the idea that, if you work for a radical social justice cause, there's always someone powerful attacking your political positions, but if you believe your cause is just and what you're doing is right, you'll survive. I found that good non-profit work is indeed something that just one of us can do against all that money and power trying to crush us into roaches0. Non-profit work really was the dream career I'd always wanted.

    Still, the experience left me permanently distrustful of Microsoft. I've tried to keep an open mind, and watch for potential change in behavior. I admittedly don't think Microsoft became a friend to Free Software in the 11 years since they put me through the wringer during what was almost literally my first day on the job as FSF's Executive Director (a position I ultimately held until 2005). But, I am now somewhat sure Microsoft's executives aren't hatching new plans to kill copyleft every morning anymore. Indeed, I was excited this week to see that my colleagues at the Samba Project acknowledged Microsoft's help in creating documentation that allowed Samba to implement compatibility with Active Directory. Even I have to admit that companies do change, and sometimes a little bit for the better.

    But, companies don't always change for the better. Over an even shorter period, I've watched another company get worse at almost the same rate as Microsoft's improving.

    Specifically, this week, Mark Shuttleworth of Canonical, Ltd. said that those of us who stand strongly against proprietary software device drivers are insecure McCarthyists. I wonder if Mark realized the irony of using the term McCarthyism to refer to the same people who Microsoft called unAmerican just a decade ago.

    I marvel at these shifting winds of politics. These days, the guy out there slurring against copyleft advocates claims to be the biggest promoter of Free Software himself, and in fact built most of his product on the Free Software that is often defended by the people he claims are on a witch-hunt.

    I wrote many blog posts in 2010 critical of Canonical, Ltd. and its policies. Someone asked me in October if I'd stopped because Canonical, Ltd. got better, or if they'd just bought me off. I answered simply, saying, First of all, Mark hasn't shared any of his unfathomable financial wealth with me. But, more importantly, Mark is making enough bad decisions that Canonical, Ltd.'s behavior is now widely criticized, even by the tech press. Others are doing a good enough job pointing out the problems now; I don't have to. Indeed, I'm supportive of RMS' recent comments about Canonical, Ltd. and its Ubuntu project (and RMS surely has a larger microphone than I do, since he's famous). I've also got nothing to add to his well-argued points, so I simply endorse them.

    Nevertheless, I just couldn't let the situation go without commenting. This week, I watched Microsoft (who once ran a campaign to kill FSF's flagship license) do something helpful to Free Software, while also watching Canonical, Ltd. (who has helped write a lot of GPL'd software) pull a page from Microsoft's old playbook to attack GPL advocates. That's got an intriguing symmetry to it. It's not “history repeating itself”, because all the details are different. But, one fact is still exactly the same: The Wealthy sure do like to call us names when it suits them.

    Update 2012-12-15: In addition to my usual comment thread (which has been quite active on this post), there's also a comment thread on Hacker News and also one on reddit about this blog post.

    Update 2012-12-18: Karen Sandler and I discuss some of the issues related to Shuttleworth's comments on Free as in Freedom, Episode 0x36.

    0 Strangely, my head (somewhat-uselessly) still contains now, as it did then, verbatim copies of Dead Kennedys' lyric sheets, so I quoted that easily from memory. Fortunately, I am pretty sure verbatim copying something into your own brain isn't copyright infringement (yet).

    1I realized after reading some of the reddit comments that it might be useful to link here to the essay I wrote at the time of Allchin's comments, called The GNU GPL and the American Dream.

    Posted on Friday 14 December 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2012-12-09: Who Ever Thought APIs Were Copyrightable, Anyway?

    Back in the summer, there was a widely covered story about Judge Alsup's decision regarding copyrightability in the Oracle v. Google case. Oracle has appealed the verdict, so presumably this will enter the news again at some point. I'd been meaning to write a blog post about it since it happened, and Karen Sandler and I had also been planning an audcast to talk about it.

    Karen and I finally released last week our audcast on it, episode 0x35 of FaiF on the subject. Fact of the matter is, as Karen has been pointing out, there actually isn't much to say.

    Meanwhile, the upside of the delay in commenting is that I can respond to some of the comments that I've seen in the wake of the decision's publication. The most common confusion about Alsup's decision, in my view, comes from the imprecision of programmers' use of the term “API”. The API and the implementation of that API are different things. Frankly, in the Free Software community, everyone always assumed APIs themselves weren't copyrightable. The whole idea of a clean-room implementation of something centers around the idea that the APIs aren't copyrighted. GNU itself depends on the fact that Unix's APIs weren't copyrighted; only the code that AT&T wrote to implement Unix was.

    Those who oppose copyleft keep saying this decision eviscerates copyleft. I don't really see how it does. For all this time, Free Software advocates have always reimplemented proprietary APIs from scratch. Even copylefted projects like Wine depend on this, after all.

    But, be careful here. Many developers use the phrase API to mean different things. Implementations of an API are still copyrightable, just like they always have been. Distribution of other people's code that implements APIs still requires their permission. What isn't copyrightable is general concepts like “to make things work, you need a function that returns an int and takes a string as an argument, and that function must be called Foo”.
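    A minimal C sketch may make that distinction concrete. (The function Foo and its behavior here are invented for illustration, following the hypothetical in the paragraph above; nothing in this sketch is from any real codebase or court filing.) The declaration is the kind of functional specification the decision treats as uncopyrightable; the function body is copyrightable expression, which a clean-room reimplementation would rewrite from scratch while keeping the same declaration:

    ```c
    #include <assert.h>
    #include <string.h>

    /* The API: a declaration like this one is the functional
     * specification.  Anyone may write their own compatible
     * implementation that satisfies it. */
    int Foo(const char *s);

    /* One implementation: this body is copyrightable expression.
     * A clean-room reimplementation keeps the signature above but
     * writes a different body without copying this one. */
    int Foo(const char *s)
    {
        return (int) strlen(s);  /* one of many possible behaviors */
    }

    int main(void)
    {
        /* A caller depends only on the declared contract, not on
         * which implementation happens to be linked in. */
        assert(Foo("copyleft") == 8);
        return 0;
    }
    ```

    A second, independently written body behind the same declaration would be a distinct copyrightable work implementing the same API, which is exactly what projects like Wine and GNU have always relied upon.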

    Note: This post has been about the copyright issues in the case. I previously wrote a blog post when Oracle v. Google started, which was mostly about the software patent issues. I think the advice in there for Free Software developers is still pretty useful.

    Posted on Sunday 09 December 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2012-12-03: FOSDEM Legal & Policy Issues DevRoom

    Richard Fontana, Tom Marble, Karen Sandler, and I will reprise our roles as co-coordinators of the Legal and Policy Issues DevRoom for FOSDEM 2013. The CFP for the FOSDEM 2013 Legal & Policy Issues DevRoom is now available, and the deadline for submission is 21 December 2012, about 18 days from now.

    I want to put a very specific call out to a group of people who may not have considered submitting a talk to a track like this before. In particular, if you are a Free Software developer who has ideas about the policy/licensing decisions for your project, then you should consider submitting a proposal.

    The problem we have is that we often hear from lawyers, or licensing pundits like me, on these types of tracks. We all have a lot to say about issues of policy or licensing. But, it's the developers who lead these projects who know best what policy issues you face, and what is needed to address those issues.

    I also want to add something my graduate adviser once said to me: At the Master's level, it's sufficient for your thesis just to ask an important and complex question well. Only a PhD-level thesis has to propose answers to such questions. In my view, our track is at the Master's level: talks that ask complex licensing policy questions well, but don't necessarily have all the answers are just the kind of proposals we're seeking.

    Please share this CFP widely. We've got a two-day dev room, so there are plenty of slots. While we can't guarantee acceptance of any specific talk, your job as submitters is to make the co-chairs' job difficult by giving us many excellent talks to choose among. We look forward to your submissions!

    Posted on Monday 03 December 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2012-11-29: If You've Got a Problem With Me, Please Contact Me!

    [ I usually write blog posts about high-minded software freedom concepts. This post isn't one of those; it's much more typical personal blog-fare, so please stop reading here if you're looking for a good software freedom essay; just move on to another one of my blog posts if that's what you want. ]

    I heard something really odd today. I was told that a relatively large group of people find me untrustworthy and refuse to work or collaborate with me because of it. I heard this second-hand, and I asked for more details, and the person who told me really doesn't want to be involved any further (and I don't blame that person, because the whole thing is admittedly rather silly, and I'd walk away too if it wasn't personally about me).

    There are people in the world I don't trust too, of course. I always tell them so to their face. I just operate my life in a really transparent way, so if I believe someone is my political opponent, I tell them so. I've written emails to people that say things like: Now that you work for Company Blah, I have to assume you're working against Free Software, because Company Blah has a history of doing so. If someone says something offensive to me, I tell them they've offended me. Sometimes, I clearly say that I am explicitly not forgiving the person, which thus makes it clear that there is a standing issue between us indefinitely. I do occasionally hold a grudge. (Frankly, I doubt people who claim they never hold a grudge, because everyone I've ever met seems to have a grudge against somebody for something.)

    I've been told that I'm not tactful. I always respond with: Of course, I'm not a tactful person. I've made a conscious choice not to change that behavior because, IMO, the other option is to leave people guessing about how you feel about their actions. If I think someone's action is wrong, I tell them I think it's wrong and why. If I think someone's action is good, I thank them for it and ask if I can help in the future. That's not a tactful way to live, I admit, but I believe it's nevertheless an honorable way to live. I'm grateful for the tactful people I know, because I realize they can accomplish things that I can't, but I also point out that there are things that the untactful can accomplish that the tactful can't. For example, only the tactless can point out emperors who wear no clothes.

    Meanwhile, the kinds of backroom (and seemingly tactful) politics that we sometimes see in Free Software have a way of descending into high school drama. I heard from Foo who heard from Bar that you won't be elected class president because nobody likes you. No, I can't say who Bar heard it from. No, I can't tell you exactly why. This immature behavior is, IMO, much worse than being tactless.

    I frankly think those who operate this way should be ashamed of themselves. I'm therefore putting out a public call (which is just a repeat of what I've said privately to people for years): if you have some problem with something I've done, or find my actions at any time untrustworthy, or wrong, or anything else negative, you're welcome to contact me. I get emails almost weekly anyway of people who have issues with something I've said on the Free as in Freedom audcast or somewhere else. I take the time to answer almost everyone who writes to me. I also always tell people that you can keep pinging me until I answer and I won't be offended if you do. Sometimes, I might just write back with the reasons why I decided not to answer you. But, I'll always at least tell you my opinions on what you've said, even if it's just a tactless: I don't think what you're writing about is a major priority and I can't schedule the time to think about it further right now. I challenge others in the Free Software community to also rise up to more transparency in their actions and statements.

    I want to be clear, BTW, there's a difference between being tactless and mean. I work really hard not to be mean; I sometimes fail, and I also work very hard to examine my actions to see if I've crossed the line. I send apologies to people when it becomes apparent that I've been not just tactless but also mean. I have to admit, though, there are plenty of mean people kicking around the Free Software world who owe a bunch of apologies (including some to me), but if you think I owe you an apology, I encourage you to write to me and ask for one. In my tactless style, I'll either give you an apology or tell you why I disagree about why you deserve one. :)

    Finally, I thought hard about whether to “name names” herein. It's surely obvious that a specific situation has inspired my words above, and those who know what this situation is will realize immediately; those that don't will sadly be left wondering what the hell is going on. Still, as disgusted as I am about the backroom politics I'm dealing with at the moment, I think public admonishment of the perpetrators here would cross the line from tactless to mean, so I decided not to cross the line.

    Posted on Thursday 29 November 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2012-11-22: Left Wondering Why VideoLan Relicensed Some Code to LGPL

    I first met the original group of VLC developers at the Solutions GNU/Linux conference in 2001. I had been an employee of FSF for about a year at the time, and I recall they were excited to tell the FSF about the project, and very proud that they'd used FSF's premier and preferred license (at the time): GPLv2-or-later.

    What a difference a decade makes. I'm admittedly sad that VLC has (mostly) finished its process of relicensing some of its code under LGPLv2.1-or-later. While I have occasionally supported relicensing from GPL to LGPL, every situation is different and I think it should be analyzed carefully. In this case, I don't support VideoLan's decision to relicense the libVLC code.

    The main reason to use the LGPL, as RMS put eloquently long ago, is for situations where there are many competitors and developers would face serious difficulty gaining adoption of a strong-copylefted solution. Another more recent reason that I've discovered to move to weaker licenses (and this was the case with Qt) is to normalize away some of the problems of proprietary relicensing. However, neither reason applies to libVLC.

    VLC is the most popular media player for desktop computers. I know many proprietary operating system users who love VLC and it's the first application they download to a new computer. It is the standard for desktop video viewing, and does a wonderful job advocating the value of software freedom to people who live in a primarily proprietary software world.

    Meanwhile, the VideoLan Organization's press statements have been quite vague on their reasons for changing, saying only that this change was motivated to match the evolution of the video industry and to spread the VLC engine as a multi-platform open-source multimedia engine and library. The only argument that I've seen discussed heavily in public for relicensing is ostensibly to address the widely publicized incompatibility of copyleft licensing with various App Store agreements. Yet, those incompatibilities still exist with the LGPL or, indeed, any true copyleft license. Apple's terms are so strict that they make it absolutely impossible to comply with any copyleft and Apple's terms at the same time. Other similar terms aren't much better; even Google's Play Store terms are incompatible with any copyleft license if the project has many copyright holders0.

    So, I'm left baffled: does the VLC community actually believe the LGPL would solve that problem? (To be clear, I haven't seen any official statement where the VideoLAN Organization claims that relicensing will solve that issue, but others speculate that it's the reason.) Regardless, I don't think it's a problem worth solving. The specters of “Application Store” terms and conditions are something to fight against wholly in an uncompromising way. The copyleft licensing incompatibilities with such terms are actually a signaling mechanism to show us that these stores are working actively against software freedom. I hope developers will reject deployment to these application stores entirely.

    Therefore, I'm left wondering what VLC seeks to do here. Do they want proprietary application interfaces that use their core libraries? If so, I'm left wondering why: VLC is already so popular that they could pull adopters toward software freedom by using the strong copyleft of GPL on libVLC. It seems to me they're making a bad trade-off to get only marginally more popular by allowing some proprietary derivatives. OTOH, I guess I should cut my losses on this point and be glad they stuck with any copyleft at all and didn't go all the way to a permissive license.

    Finally, I do think there's one valuable outcome shown by this relicensing effort (which Gerv pointed out first): it is possible to relicense a code base with many copyright holders. It's a lot of work, but it can be done. It appears to me that VLC did a responsible and reasonable job on that part, even if I disagree strongly with the need for such a job here in the first place.

    Update (2012-11-30): It's been pointed out to me that VLC has moved certain code from VLC into a library called libVLC, and that's the code that's been relicensed. I've made changes today to the post above to clarify that issue.

    0 If you want to hear more about my views and analysis of application store terms and conditions, please listen to the Application Stores Panel that I was on at FOSDEM 2012, which was broadcast on the audcast, Free as in Freedom.

    Posted on Thursday 22 November 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2012-09-17: GPL Violations Are Still Pretty Common, You Know?

    As I've written about before, I am always amazed when suddenly there is widespread interest in, excitement over, and focus on some particular GPL violation. I've spent most of my adult life working on copyleft compliance issues, so perhaps I've got an unusual perspective. It's just that I've seen lots of GPL violations every single day since the late 1990s. Even now, copyleft compliance remains a regular part of my monthly work. Even though it's now only one task among many that I work on every day, I'm still never surprised nor shocked by any violation.

    When some GPL violation suddenly becomes a “big story”, it reminds me of celebrity divorces. There are, of course, every single day, hundreds (maybe even thousands) of couples facing the conclusion that their marriage has ended. It's a tragedy for their families, and they'll spend years recovering. The divorce impacts everyone they know: both their families, and all their friends, too. Everyone's life who touches the couple is impacted in some way or other.

    Of course, the same is true personally for celebrities when they divorce. The weird thing is, though, that people who don't even know these celebrities want to read about the divorce and know the details. It's exciting because the media tells us that we really want to know all the details and follow the drama every step of the way. It's disturbing that our culture sympathizes more with the pain of the rich and famous than the pain of our everyday neighbors.

    Like divorce, copyleft violations are very damaging, but failure to comply with the copyleft licenses impacts three specific sets of people who directly touch the issue: the people whose copyrights are infringed, the people who infringed the copyrights, and the people who received infringing articles. Everyone else is just a spectator0.

    That said, my heart goes out to every user who is sold software that they can't study, improve and share. I'm doubly concerned when those people were legally entitled to those rights, and an infringer snatched them away by failing to comply with copyleft licenses. I also have great sympathy for the individual copyright holders who licensed their works under GPL, yet find many infringers ignoring the rather simple and reasonable requirements of GPL.

    But, I don't think gawking has any value. My biggest post-mortem complaint about SCO was not the FUD: that was obviously wrong, and we knew the community would prevail. It was the constant gawking, which took away time that we could have spent writing more Free Software and doing good work in the software freedom community. So, from time to time, I like to encourage everyone to avoid gawking. (Unless, of course, you're doing it with the GNU implementation of AWK. :)

    So, when you read GPL violation stories, even when they seem novel, remember that they're mundane tragedies. It's good someone's working on it, but they don't necessarily deserve the inordinate attention that they sometimes get.

    Update, morning of 2012-09-18: Someone asked me to state more clearly how I felt about Red Hat's GPL enforcement action against TwinPeaks1. I carefully avoided saying that above last night, but I suppose I'm going to get asked so often that I might as well say. Plus, the answer is actually quite simple: I simply don't know until the action completes. I only believe that GPL enforcement is morally legitimate if compliance with the GPL is paramount above all other goals. I have never seen Red Hat enforce the GPL before, so I don't know the pecking order of their goals. The proof of the pudding is in the eating, and the proof of the enforcement is whether compliance is obtained. In short, if I were the Magic 8-Ball of GPL compliance, I'd say “Reply hazy, ask again later”2.

    0 Obviously, there's a large negative impact that many seemingly “small” GPL violations, in aggregate, will together have on the entire software freedom community. But, I'm examining the point narrowly in the main text above. For example, imagine if the only GPL violation in the history of the world were done by one company, on one individual's copyrights, and only one customer ever purchased the infringing product. While I'd still value pursuit of that violation (and I would even help such a copyright holder pursue the matter), even I'd have to readily admit that the impact on the software freedom community of that one violation is rather limited.

    Indeed, the larger policy impact of violations comes from the aggregate effect. That's why I've long argued that it's important to deal with the giant volume of GPL violations rather than focus on any one specific matter, even if that matter looks like a “big one”. It's just too easy sometimes to think one particular copyright holder, or one particular program, or one particular product deserves an inordinate amount of attention, but such undue focus is likely an outgrowth of familiarity breeding a bit too much contempt. I occasionally temporarily fall into that trap, so it makes me sad when others do as well.

    1 What bugs me most is that I have yet to see a good Twin Peaks parody (ala Twin Beaks) of this whole court case. I suppose I'm just too old; I was in high school when the entire nation was obsessed with David Lynch's one hit TV series.

    2 cf15290cc2481dbeacef75a3b8a87014e056c256a1aa485e8684c8c5f4f77660

    Posted on Monday 17 September 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2012-07-23: I Received a 2012 O'Reilly Open Source Award

    Last Friday, 20 July 2012, I received an O'Reilly Open Source Award, in appreciation for my decade of work in Free Software non-profit organizations, including my current daily work at the Software Freedom Conservancy, my work at the FSF (including starting FSF's associate membership program), and for my work creating and defending copyleft licensing, including such things as inventing the idea behind the Affero clause, helping draft AGPLv3, and, more generally, enforcing copyleft.

    I'm very proud of all this work. My obsession with software freedom goes back far into my past, when I downloaded my first copy of GNU Emacs in 1991 from Usenet and my first GNU/Linux distribution, SLS, in 1992, booting for the first time, on the first computer I ever owned, a copy of Linux 0.99pl12.

    I honestly have written a lot less Free Software than I wanted to. I've made a patch here and there over the years to dozens of projects. I was a co-maintainer of the AGPL'd PokerSource system for a while, and I made various (mostly mixed-success) attempts to build a better virtual machine for Perl, which now is done much better than I ever did by the Parrot project.

    Despite the fact that making better software was what enthralled me most, feeling the helplessness of supporting, using and writing proprietary software in my brief for-profit career convinced me that lack of adequate software freedom was the most dangerous social justice problem in the computing community. I furthermore realized that lots of people were ready and willing to write great Free Software, but that few wanted to do the (frankly more boring) work of running non-profit organizations to defend and advance software freedom. Thus, I devoted myself to helping FSF and Conservancy to be successful organizations that could assist in that regard. I'm privileged and proud to continue my service to both of these organizations.

    Being recognized for this work means a great deal to me. Awards have a special meaning for me, because financial success never really mattered much to me, but knowing that I've made a contribution to something greater than myself matters greatly. Receiving an award that indicates that I've succeeded in that regard invigorates me to do even more. So, at this moment of receiving this award, I'd like to thank all of you in the software freedom community who appreciate and support my work. It means a great deal to me that my work has made a positive impact.

    Posted on Monday 23 July 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.



  • 2012-05-29: Conservancy's Coordinated Compliance Efforts

    As most readers might have guessed, my work at Software Freedom Conservancy has been so demanding in the last few months that I've been unable to blog, although I have kept up (along with my co-host Karen Sandler) releasing new episodes of the Free as in Freedom oggcast.

    Today, Karen and I released a special episode of FaiF (which is merely special because it was released during a week that we don't normally release a show). In it, Karen and I discuss in detail Conservancy's announcement today of its new coordinated compliance program that includes many copyright holders and projects.

    This new program is an outgrowth of the debate that happened over the last few months regarding Conservancy's GPL compliance efforts. Specifically, I noticed that, buried in the FUD over the last four months regarding GPL compliance, there was one key criticism that was valid and couldn't be ignored: Linux copyright holders should be involved in compliance actions on embedded systems. Linux is a central component of such work, and the BusyBox developers agreed wholeheartedly that having some Linux developers involved with compliance would be very helpful. Conservancy has addressed this issue by building a broad coalition of copyright holders in many different projects who seek to work on compliance with Conservancy, including not just Linux and BusyBox, but other projects as well.

    I'm looking forward in my day job to working collaboratively with copyright holders of many different projects to uphold the rights guaranteed by GPL. I'm also elated at the broad showing of support by other Conservancy projects. In addition to the primary group in the announcement (i.e., copyright holders in BusyBox, Samba and Linux), a total of seven other GPL'd and/or LGPL'd projects have chosen Conservancy to handle compliance efforts. It's clear that Conservancy's compliance efforts are widely supported by many projects.

    The funniest part about all this, though, is that while there has been no end of discussion of Conservancy's and others' compliance efforts this year, most Free Software users never actually have to deal with the details of compliance. Requirements of most copyleft licenses like GPL generally trigger on distribution of the software — particularly distribution of binaries. Since most users simply receive distribution of binaries, and run them locally on their own computer, rarely do they face complex issues of compliance. As the GPLv2 says, The act of running the Program is not restricted.

    Posted on Tuesday 29 May 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2012-02-11: Cutting Through The Anti-Copyleft Political Ruse

    I'd like to thank Harald Welte for his reasoned and clear blog post about GPL enforcement which I hope helps to clear up some of the confusions that I also wrote about recently.

    Harald and I appear to agree that all enforcement actions should request, encourage, and pressure companies toward full FLOSS compliance. Our only disagreement, therefore, is on a minor strategy point. Specifically, Harald believes that the “reinstatement of rights lever” shouldn't be used to require compliance on all FLOSS licenses when resolving a violation matter, and I believe such use of that lever is acceptable in some cases. In other words, Harald and I have only a minor disagreement on how aggressively a specific legal tool should be utilized. (I'd also note that, given Harald's interpretation of German law, he never had the opportunity to even consider using that tool, whereas it's always been a default tool in the USA.) Anyway, other than this minor side point, Harald and I appear to be in full agreement on everything else regarding GPL enforcement.

    Specifically, one key place where Harald and I are in total agreement is: copyright holders who enforce should approve all enforcement strategies. In every GPL enforcement action that I've done in my life, I've always made sure of that. Indeed, even while I'm a very minor copyright holder in BusyBox (just a few patches), I still nevertheless defer to Erik Andersen (who holds a plurality of the BusyBox copyrights) and Denys Vlasenko (who is the current BusyBox maintainer) about enforcement strategy for BusyBox.

    I hope that Harald's post helps to end this silly recent debate about GPL enforcement. I think the overflowing comment pages can be summarized quite succinctly: some people don't like copyleft and don't want it enforced. Others disagree, and want to enforce. I've written before that if you support copyleft, the only logically consistent position is to also support enforcement. The real disagreement here, thus, is one about whether or not people like copyleft: that's an age-old debate that we just had again.

    However, the anti-copyleft side used a more sophisticated political strategy this time. Specifically, copyleft opponents are attempting to scapegoat minor strategy disagreements among those who do GPL enforcement. I'm grateful to Harald for cutting through that ruse. Those of us that support copyleft may have minor disagreements about enforcement strategy, but we all support GPL enforcement and want to see it continue. Copyleft opponents will of course use political maneuvering to portray such minor disagreements as serious policy questions. Copyleft opponents just want to distract the debate away from the only policy question that matters: Is copyleft a good force in the world for software freedom? I say yes, and thus I'm going to keep enforcing it, until there are no developers left who want to enforce it.

    Posted on Saturday 11 February 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2012-02-01: Some Basic Thoughts on GPL Enforcement

    I've had the interesting pleasure the last 36 hours to watch people debate something that's been a major part of my life's work for the last thirteen years. I'm admittedly proud of myself for entirely resisting the urge to dive into the comment threads, and I don't think it would be all that useful to do so. Mostly, I believe my work stands on its own, and people can make their judgments and disagree if they like (as a few have) or speak out about how they support it (as even more did — at least by my confirmation-biased count, anyway :).

    I was concerned, however, that some of the classic misconceptions about GPL enforcement were coming up yet again. I generally feel that I give so many talks (including releasing one as an oggcast) that everyone must by now know the detailed reasons why GPL enforcement is done the way it is, and how a plan for non-profit GPL enforcement is executed.

    But, the recent discussion threads show otherwise. So, over on Conservancy's blog, I've written a basic, first-principles summary of my GPL enforcement philosophy and I've also posted a few comments on the BusyBox mailing list thread, too.

    I may have more to say about this later, but that's it for now, I think.

    Posted on Wednesday 01 February 2012 by Bradley M. Kuhn.

    Comment on this post in this conversation.




  • 2011-12-16: FaiFCast Release, and Submit to FOSDEM Legal & Policy Issues DevRoom

    Today Karen Sandler and I released Episode 0x1E of the Free as in Freedom oggcast (available in ogg and mp3 formats). There are two important things discussed on that oggcast that I want to draw your attention to:

    Submit a proposal for the Legal & Policy Issues DevRoom CFP

    Tom Marble, Richard Fontana, Karen Sandler, and I are coordinating the Legal and Policy Issues DevRoom at FOSDEM 2012. The Call for Participation for the DevRoom is now available. I'd like to ask anyone reading this blog post who has an interest in policy and/or legal issues related to software freedom to submit a talk by Friday 30 December 2011, by emailing <>.

    We only have about six slots for speakers (it's a one-day DevRoom), so we won't be able to accept all proposals. I just wanted to let everyone know that so you don't flame me if you submit and get rejected. Meanwhile, note that our goal is to avoid the “this is what copyrights, trademarks and patents are” introductory talks. Our focus is on complex issues for those already informed about the basics. We really felt that the level of discourse about legal and policy issues at software freedom conferences needs to rise.

    There are, of course, plenty of secret membership clubs 0, even some with their own private conferences, where these sorts of important issues are discussed. I personally seek to move high-level policy discussion and debate out of the secret “old-boys” club backrooms and into a public space, where the entire software freedom community can openly discuss important legal and policy questions. I hope this DevRoom is a first step in that direction!

    Issues & Questions List for the Software Freedom Non-Profits Debate

    I've made reference recently to debates about the value of non-profit organizations for software freedom projects. In FaiFCast 0x1E, Karen and I discuss the debate in depth. As part of that, as you'll see in the show notes, I've made a list of issues that I think were fully conflated during the recent debates. I can't spare the time to opine in detail on them right now (although Karen and I do a bit of that in the oggcast itself), but I did want to copy the list over here in my blog, mainly to list them out as issues worth thinking about in a software freedom non-profit:

    • Should a non-profit home decide what technical infrastructure is used for a software freedom project? And if so, what should it be?
    • If the non-profit doesn't provide technological services, should non-profits allow their projects to rely on for-profits for technological or other services?
    • Should a non-profit home set political and social positions that must be followed by the projects? If so, how strictly should they be enforced?
    • Should copyrights be held by the non-profit home of the project, or with the developers, or a mix of the two?
    • Should the non-profit dictate licensing requirements on the project? If so, how many licenses and which licenses are acceptable?
    • Should a non-profit dictate strict copyright provenance requirements on their projects? If not, should the non-profit at least provide guidelines and recommendations?

    This list of questions is far from exhaustive, but I think it's a pretty good start.

    0 Admittedly, I've got a proverbial axe to grind about these secretive membership-only groups, since, for nearly all of them, I'm persona non grata. My frustration level in this reached a crescendo when, during a session at LinuxCon Europe recently, I asked for the criteria to join one such private legal-issues discussion group, and I was told the criteria themselves were secret. I pointed out to the coordinators of the forum that this wasn't a particularly Free Software friendly way to run a discussion group, and they simply changed the subject. My hope is that this FOSDEM DevRoom can be a catalyst to start a new discussion forum for legal and policy issues related to software freedom that doesn't have this problem.

    BTW, just to clarify: I'm not talking about FLOSS Foundations as one of these secretive, members-only clubs. While the FLOSS Foundations main mailing list is indeed invite-only, it's very easy to join and the only requirement is: “if you repost emails from this list publicly, you'll probably be taken off the mailing list”. There is no “Chatham House Rule” or other silly, unenforceable, spend-an-inordinate-amount-of-time-remembering-how-to-follow rule in place for FLOSS Foundations, but such silly rulesets are now common with these other secretive legal issues meeting groups.

    Finally, I know I haven't named publicly the members-only clubs I'm talking about here, and that's by design. This is the first time I've mentioned them at all in my blog, and my hope is that they'll change their behaviors soon. I don't want to publicly shame them by name until I give them a bit more time to change their behaviors. Also, I don't want to inadvertently promote these fora either, since IMO their very structure is flawed and community-unfriendly.

    Update: Some have claimed incorrectly that the text in the footnote above somehow indicates my unwillingness to follow the Chatham House Rule (CHR). I refuted that, noting that the text above doesn't say that, and those who think it does have simply misunderstood. My primary point (which I'll now state even more explicitly) is that CHR is difficult to follow, particularly when it is mis-applied to a mailing list. CHR is designed for meetings, which have a clear start time and a finish time. Mailing lists aren't meetings, so the behavior of CHR when applied to a mailing list is often undefined.

    I should furthermore note that people who have lived under CHR for a series of meetings also have similar concerns as mine. For example, Allison Randal, who worked under CHR on Project Harmony noted:

    The group decided to adopt Chatham House Rule for our discussions. … At first glance it seems quite sensible: encourage open participation by being careful about what you share publicly. But, after almost a year of working under it, I have to say I’m not a big fan. It’s really quite awkward sometimes figuring out what you can and can’t say publicly. I’m trying to follow it in this post, but I’ve probably missed in spots. The simple rule is tricky to apply.

    I agree with Allison.

    Posted on Friday 16 December 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2011-11-28: What's a Free Software Non-Profit For?

    Over on Conservancy's blog, I just published a blog post entitled What's a Free Software Non-Profit For?. It responds in part to what was written last week about non-profit homes for Free Software projects.

    Posted on Monday 28 November 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-11-24: No, You Won't See Me on Twitter, Facebook, Linkedin, Google Plus, Google Hangouts, nor Skype

    Most folks outside of technology fields and the software freedom movement can't grok why I'm not on Facebook. Facebook's marketing has reached most of the USA's non-technical Internet users. On the upside, Facebook gave the masses access to something akin to blogging. But, as with most technology controlled by for-profit companies, Facebook is proprietary software. Facebook, as a software application, is written in a mix of server-side software that no one besides Facebook employees can study, modify and share. On the client-side, Facebook is an obfuscated, proprietary software Javascript application, which is distributed to the user's browser when they access the site. Thus, in my view, using Facebook is no different than installing a proprietary binary program on my GNU/Linux desktop.

    Most of the press critical of Facebook has focused on privacy, data mining of users' data on behalf of advertisers, and other types of data autonomy concerns. Such concerns remain incredibly important too. Nevertheless, since the advent of the software freedom community's concerns about network services a few years ago, I've maintained this simple principle, that I still find correct: While I can agree that merely liberating all software for an online application is not a sufficient condition to treat the online users well, the liberation of the software is certainly a necessary condition for the freedom of the users. Releasing freely all code for the online application is the first step for freedom, autonomy, and privacy of the users. Therefore, I certainly don't give in to running proprietary software on my FaiF desktops. I simply refuse to use Facebook.

    Meanwhile, when Google Plus was announced, I didn't see any fundamental difference from Facebook. Of course, there are differences on the subtle edges: for example, I do expect that Google will respect data portability more than Facebook. However, I expect data mining on advertisers' behalf will be roughly the same, although Google will likely be more subtle with advertising tie-in than Facebook, and thus users will not notice it as much.

    But, since I'm firstly a software freedom activist, on the primary issue of my concern, there is absolutely no difference between Facebook and Google Plus. Google Plus' software is a mix of server-side trade-secret software that only Google employees can study, share, and modify, and a client-side proprietary Javascript application downloaded into the users' browsers when they access the website.

    Yet, in a matter of just a few months, much of the online conversation in the software freedom community has moved to Google Plus, and I've heard very few people lament this situation. It's not that I believe we'll succeed against proprietary software tomorrow, and I understand fully that (unlike me) most people in the software freedom community have important reasons to interact regularly with those outside of our community. It's not that I chastise software freedom developers and activists for maintaining a minimal presence on these services to interact with those who aren't committed to our cause.

    My actual complaint here is that Google Plus is becoming the default location for discussion of software freedom issues. I've noticed because I've recently discovered that I've missed a lot of community conversations that are only occurring on Google Plus. (I've similarly noticed that many of my Free Software contacts spam me to join Linkedin, so I assume something similar is occurring there as well.)

    What's more, I've received more pressure than ever before to sign up for not only Google Plus, but for Twitter, Linkedin, Google Hangout, Skype and other socially-oriented online communication services. Indeed, just in the last ten days, I've had three different software freedom development projects and/or organizations request that I sign up for a proprietary online communication service merely to attend a meeting or conference call. (Update on 2013-02-16: I still get such requests on a monthly basis.) Of course, I refused, but I've not felt peer pressure this strong since I was a teenager.

    Indeed, the advent of proprietary social networking software adds a new challenge to those of us who want to stand firm and resist proprietary software. As adoption of services like Facebook, Twitter, Google Plus, Skype, Linkedin and Google Hangouts increases, those of us who resist using proprietary software will come under ever-increasing peer pressure. Disturbingly, I've found that peer pressure comes not only from folks outside our community, but also from those who have, for years, otherwise been supporters of the software freedom movement.

    When I point out that I use only Free Software, some respond that Skype, Facebook, and Google Plus are convenient and do things that can't be done easily with Free Software currently. I don't argue that point. It's easy to resist Microsoft Windows, or Internet Explorer, or any other proprietary software that is substandard and works poorly. But proprietary software developers aren't necessarily stupid, nor untalented. In fact, proprietary software developers are highly paid to write easy-to-use, beautiful and enticing software (cross-reference Apple, BTW). The challenge the software freedom community faces is not merely to provide alternatives to the worst proprietary software, but to also replace the most enticing proprietary software available. Yet, if FaiF Software developers settle into being users of that enticing proprietary software, the key inspiration for development disappears.

    The best motivator to write great new software is to solve a problem that's not yet solved. To inspire ourselves as FaiF Software developers, we can't complacently settle into use of proprietary software applications as part of our daily workflow. That's why you won't find me on Google Plus, Google Hangout, Facebook, Skype, Linkedin, Twitter or any other proprietary software network service. You can phone me via SIP, you can read my blog and feed, and chat with me on IRC and XMPP, and those are the only places that I'll be until there are Free Software replacements for those other services. I sometimes kid myself into believing that I'm leading by example, but sadly few in the software freedom community seem to be following.

    Posted on Thursday 24 November 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-11-13: Just Ignore Him; He'll Go Away Eventually.

    One of my favorite verbal exchanges in an episode of The West Wing occurs in S03E08, The Women of Qumar. In the story, after President Bartlet said at a fundraiser: Everything has risks. Your car can drive into a lake and your seatbelt jams, but no one's saying don't wear your seat belt, someone had a car accident while not wearing a seatbelt and filed a lawsuit naming the President as a defendant. Sam, the Deputy Communications Director, thinks the White House should respond preemptively before the story. Toby, the Communication Director, instead ignores Sam and then has this wonderfully deadpan exchange with the President:

    [Toby,] Come with me for a second, would you?
    Sir, it's possible you're going to hear some stuff about seatbelts today. I urge you to ignore it.
    No problem. [changes topic] Are you straightening things out with the Smithsonian?

    I remember when I first watched this episode in late 2001. It expressed to me a cogent and concise fact of press relations: someone may be out there trying to get attention for themselves on a topic related to you with some sophistic argument, but you should sometimes just ignore it.

    With that, I say: Dear readers of my blog, you may have heard some stuff about Edward Naughton again this week. I urge you to ignore it.

    I hope you'll all walk in the shoes of President Bartlet and respond with a “No problem” and change the topic. If you really want to follow this story, just read what I've said before on it; nothing has changed.

    Meanwhile, while Naughton seems to be happy to selectively quote me to support his sophistry, he still hasn't gotten in touch with me to help actually enforce the GPL. It's obvious he doesn't care in the least about the GPL; he just wants to use it inappropriately to attack Android/Linux and Google. There are criticisms that Google and Android/Linux deserve, but none of them relate to the topic of GPL violations.

    Posted on Sunday 13 November 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-11-11: Last Four FaiF Episodes

    Those of you who follow my blog have probably wondered where I've been. Quite frankly, there is just so much work going on at Conservancy that I have had almost no time to do anything but Conservancy work, eat and sleep. My output on this blog surely shows that.

    The one thing that I've kept up with is the oggcast, Free as in Freedom, which I co-host with Karen Sandler and which is produced by Dan Lynch.

    Since I last made a blog post here, Karen, Dan and I released four oggcasts. I'll discuss them here in reverse chronological order:

    In Episode 0x1C, which was released today, we published Karen's interview with Adam Dingle of Yorba. IMO (which is undoubtedly biased), this episode is an important one, since it relates to the issues facing non-profit organizations in our community that are waiting in the 501(c)(3) application queue. This is a detailed and specific follow-up to the issues that Karen and I discussed on FaiF's Episode 0x13.

    In Episode 0x1B, Karen and I discuss in some detail about the work that we've been up to. Both Karen and I are full-time Executive Directors, and the amount of work that job takes always seems insurmountable. Although, after we recorded the episode, I somewhat embarrassingly remembered the Bush/Kerry debate where George W. Bush kept saying his job as president is hard work. It's certainly annoying when a chief executive goes on and on about how hard his job is, so I apologize if I did a little too much of that in Episode 0x1B.

    In Episode 0x1A, Karen and I discussed in detail Steve Jobs' death and the various news coverage about it. The subject is a bit old news now that I write this, but I'm glad we did that episode, since it gave me an opportunity to say everything I wanted to say about Steve Jobs' life and death.

    In Episode 0x19, we played Karen's interview with Jos Poortvliet, discussed the upgrade, and Karen discussed GNOME 3.2.

    My plan is to at least keep the FaiF oggcast going, and I'm even bugging Fontana that he and I should start an oggcast too. Beyond that, I can't necessarily commit to any other activities outside of that (and my job at Conservancy and volunteer duties at FSF). BTW, I recently attended a few conferences (both LinuxCon Europe and the Summer of Code Mentor Summit). At both of them, multiple folks asked me why I haven't been blogging more. I appreciate people's interest in what I'm writing, but at the moment, my day-job at Conservancy and volunteer work at FSF have had to take absolute priority.

    Based on the ebb and flow (yes, that's the first time I've actually used that phrase on my blog :) of the Free Software community that I've gotten used to over the last decade and a half, I usually find that things slow down in mid-December until mid-January. Since Conservancy's work is based on the needs of its Free Software projects, I'll likely be able to return to a “normal” 50-hour work week (instead of the 60-70 I've been doing lately) in December. Thus, I'll probably try to write some queued blog posts then to slowly push out over the few months that follow.

    Finally, I want to mention that Conservancy has a donation appeal up on its website. I hope you'll give generously to support Conservancy's work. On that, I'll just briefly mention my “hard work” again, to assure you that donors to Conservancy definitely get their money's worth when I'm on the job. Since I'm on the topic of that, I also thank everyone who has donated to FSF and Conservancy over the years. I've been fortunate to have worked full-time at both organizations, and I appreciate the community that has supported all that work over the years.

    Posted on Friday 11 November 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.



  • 2011-08-21: Desktop Summit 2011

    I realize nearly ten days after the end of a conference is a bit late to blog about it. However, I needed some time to recover my usual workflow, having attended two conferences almost back-to-back, OSCON 2011 and Desktop Summit. (The strain of the back-to-back conferences, BTW, made it impossible for me to attend Linux Con North America 2011, although I'll be at Linux Con Europe. I hope next year's summer conference schedule is not so tight.)

    This was my first Desktop Summit, as I was unable to attend the first one in Gran Canaria two years ago. I must admit, while it might be a bit controversial to say so, that I felt the conference was still like two co-located conferences rather than one conference. I got a chance to speak to my KDE colleagues about various things, but I ended up mostly attending GNOME talks and therefore felt more like I was at GUADEC than at a Desktop Summit for most of the time.

    The big exception to that, however, was in fact the primary reason I was at Desktop Summit this year: to participate in a panel discussion with Mark Shuttleworth and Michael Meeks (who gave the panel a quick one-sentence summary on his blog). That was a plenary session, and the room was filled with KDE and GNOME developers alike, all of whom seemed very interested in the issue.

    Photo of The CAA/CLA panel discussion at Desktop Summit 2011.

    The panel format was slightly frustrating — primarily due to Mark's insistence that we all make very long opening statements — although Karen Sandler nevertheless did a good job moderating it and framing the discussion.

    I get the impression most of the audience was already pretty well informed about all of our positions, although I think I shocked some by finally saying clearly in a public forum that I have been lobbying FSF to make copyright assignment for FSF-assigned projects optional rather than mandatory. Nevertheless, we were cast well into our three roles: Mark, who wants broad licensing control over projects his company sponsors so he can control the assets (and possibly sell them); Michael, who has faced so many troubles in the debacle that he believes inbound=outbound can be The Only Way; and me, who believes that copyright assignment is useful for non-profits willing to promise to do the public good and enforce the GPL, but otherwise is a Bad Thing.

    Lydia tells me that the videos will be available eventually from Desktop Summit, and I'll update this blog post when they are so folks can watch the panel. I encourage everyone concerned about the issue of rights transfers from individual developers to entities (be they via copyright assignment or other broad CLA means) to watch the video once it's available. For the moment, Jake Edge's LWN article about the panel is a pretty good summary.

    My favorite moment of the panel, though, was when Shuttleworth claimed he was but a distant observer of Project Harmony. Karen, as moderator, quickly pointed out that he was billed as Project Harmony's originator in the panel materials. It's disturbing that Shuttleworth thinks he can get away with such a claim: it's a matter of public record that Amanda Brock (Canonical, Ltd.'s General Counsel) initiated Project Harmony, led it for most of its early drafts, and then Canonical Ltd. paid Mark Radcliffe (a lawyer who represents companies that violate the GPL) to finish the drafting. I suppose Shuttleworth's claim is narrowly true (if misleading) since his personal involvement as an individual was only tangential, but his money and his staff were clearly central: even now, it's led by his employee, Allison Randal. If you run the company that runs a project, it's your project: after all, doesn't that fit clearly with Shuttleworth's suppositions about why he should be entitled to be the recipient of copyright assignments and broad CLAs in the first place?

    The rest of my time at Desktop Summit was more as an attendee than a speaker. Since I'm not a desktop or GUI developer by any means, I mostly went to talks and learned what others had to teach. I was delighted, however, that no less than six people came up to me and said they really liked this blog. It's always good to be told that something you put a lot of volunteer work into is valuable to at least a few people, and fortunately everyone on the Internet is famous to at least six people. :)

    Sponsored by the GNOME Foundation!

    Meanwhile, I want to thank the GNOME Foundation for sponsoring my trip to Desktop Summit 2011, as they did last year for GUADEC 2010. Given my own work and background, I'm very appreciative of a non-profit with limited resources providing travel funding for conferences. It's a big expense, and I'm thankful that the GNOME Foundation has funded my trips to their annual conference.

    BTW, while we await the videos from Desktop Summit, there's some “proof” you can see that I attended Desktop Summit, as I appear in the group photo, although you'll need to view the hi-res version and scroll to the lower right of the image, and find me. I'm in the second/third (depending on how you count) row back, 2-3 from the right, and two to the left from Lydia Pintscher.

    Finally, I did my best to live dent from Desktop Summit 2011. That might be of interest to some as well, for example, if you want to dig back and see what folks said in some of the talks I attended. There were also two threads after the panel that may be of interest.

    Posted on Sunday 21 August 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-08-18: Will Nokia Ever Realize Open Source Is Not a Panacea?

    I was pretty sure there was something wrong with the whole thing in fall of 2009, when they first asked me. A Nokia employee contacted me to ask if I'd be willing to be a director of the Symbian Foundation (or so I thought — read on). I wrote them a thoughtful response explaining my then-current concerns about Symbian:

    • the poor choice of the Eclipse Public License for the eventual code,
    • the fact that Symbian couldn't be built in any software freedom system environment, and
    • that the Symbian source code that had been released thus far didn't actually run on any existing phones.

    I nevertheless offered to serve as a director for one year, and I would resign at that point if the problems that I'd listed weren't resolved.

    I figured that was quite a laundry list. I also figured that they probably wouldn't be interested anyway once they saw my list. Amusingly, they still were. But then, I realized what was really going on.

    In response to my laundry list, I got back a rather disturbing response that showed a confusion in my understanding. I wasn't being invited to join the board of the Symbian Foundation. They had asked me instead to serve as a Director of a small USA entity (that they heralded as Symbian DevCo) that would then be permitted one Representative of the Symbian Foundation itself, which was, in turn, a trade association controlled by dozens of proprietary software companies.

    In fact, this Nokia employee said that they planned to channel all individual developers toward this Symbian DevCo in the USA, and that would be the only voice these developers would have in the direction of Symbian. It would be one tiny voice against dozens of proprietary software companies that controlled the real Symbian Foundation, a trade association.

    Anyone who has worked in the non-profit sector, or even contributed to any real software freedom project, can see what's deeply wrong there. However, my response wasn't to refuse. I wrote back and said clearly why this was completely failing to create a software freedom community that could survive vibrantly. I pointed out the way the Linux community was structured: whereby the Linux Foundation is a trade association for companies — and, while they do fund Linus' salary, they don't control his activities, nor those of any other developers. Meanwhile, the individual Linux developers have all the real authority: from community structure, to licensing, to holding copyrights, to technical decision-making. I pointed out that if they wanted Symbian to succeed, they should emulate Linux as much as they could. I suggested Nokia immediately change the whole structure to have developers in charge of the project, and have a path for Symbian DevCo to ultimately be the primary organization in charge of the codebase, while Symbian Foundation could remain the trade association, roughly akin to the Linux Foundation. I offered to help them do that.

    You might guess that I never got a reply to that email. It was thus no surprise to me in the least what happened to Symbian after that:

    So, within 17 months of Symbian Foundation's inquiry to ask me to help run Symbian DevCo, the (Open Source) Symbian project was canceled entirely, the codebase was now again proprietary (with a few of the old codedumps floating around on other sites), and the Symbian Foundation consists only of a single webpage filled with double-speak.

    Of course, even if Nokia had tried its hardest to build an actual software freedom community, Symbian still had a good chance of failing, as I pointed out in March 2010. But, if Nokia had actually tried to release control and let developers have some authority, Symbian might have had a fighting chance as Free Software. As it turned out, Nokia threw some code over the wall, gave all the power to decide what happens to a bunch of proprietary software companies, and then hung it all out to dry. It's a shining example of how to liberate software in a way that will guarantee its deprecation in short order.

    Of course, we now know that during all this time, Nokia was busy preparing a backroom deal that would end its always-burgeoning-but-never-complete affiliation with software freedom by making a deal with Microsoft to control the future of Nokia. It's a foolish decision for software freedom; whether it's a good business decision surely isn't for me to judge. (After all, I haven't worked in the for-profit sector for fifteen years for a reason.)

It's true that I've always given Maemo (and MeeGo as well) a hard time. Those involved inside Nokia spent the last six months telling me that MeeGo is run by completely different people at Nokia, and Nokia did recently launch yet another MeeGo-based product. I've meanwhile gotten the impression that Nokia is one of those companies whose executives are like wealthy Romans who pit their champions against each other in the arena to see who wins; Nokia's various divisions appear to be in constant competition with each other. I imagine someone running the place has read too much Ayn Rand.

Of course, it now seems that MeeGo hasn't, in Nokia's view, “survived as the fittest”. I learned today (thanks to jwildeboer) that, in Elop's words, there is no returning to MeeGo, even if the N9 turns out to be a hit. Nokia's commitment to Maemo/MeeGo, while it did last at least four years or so, is now gone too, as they begin their march to Microsoft's funeral dirge. Yet another FLOSS project that Nokia got serious about, coordinated poorly, and ultimately gave up on.

Considering Nokia's bad trajectory led me to think about how Open Source companies tend to succeed. I've noticed something interesting, which I've confirmed by talking to many employees of successful Open Source companies. The successful ones — those that get something useful done for software freedom while also making some cash (i.e., the true promise of Open Source) — let the developers run the software projects themselves. Such companies don't relegate the developers to a small non-profit that has to lobby dozens of proprietary software companies to actually make an impact. They don't throw code over the wall — rather, they fund developers who make their own decisions about what to do in the software. Ultimately, smart Open Source companies treat software freedom development the way R&D should be treated: fund it, see what comes out, and try to build a business model after something's already working. Companies like Nokia, by contrast, constantly put their carts in front of all the horses and wonder why those horses whinny loudly at them but don't write any code.

Open Source slowly became a fad during the DotCom era, and it strangely remains one. A lot of companies follow fads, particularly when they can't figure out what else to do. The fad becomes a quick-fix solution. Of course, for those of us who started as volunteers and enthusiasts in 1991 or earlier, software freedom isn't some new attraction at P. T. Barnum's circus. It's a community where we belong and collaborate to improve society. Companies are welcome to join us for the ride, but only if they put developers and users in charge.

Meanwhile, my personal postscript to my old conversation with Nokia arrived in my inbox late in May 2011. I received an extremely vague email from a lawyer at Nokia. She wanted very badly to figure out how to quickly dump some software project — she wouldn't tell me which one — into the Software Freedom Conservancy. Of course, I'm sure this lawyer knew nothing about the history of the Symbian project wooing me for directorship of Symbian DevCo, nor all the other history of why “throwing code over the wall” into a non-profit rarely works, particularly for Nokia. I sent her a response explaining all the problems with her request, and, true to Nokia's style, she didn't even bother to reply to thank me for my time.

I can't wait to see what project Nokia dumps over the wall next, and then, in another 17 months (or, if they really want to lead us on, four years), decides to proprietarize or abandon because, they'll say, this open-sourcing thing just doesn't work. Yet so many companies make money with it. The short answer is: Nokia, you keep doing it wrong!

    Update (2011-08-24): Boudewijn Rempt argued another side of this question. He says the Calligra suite is a counterexample of Nokia getting a FLOSS project right. I don't know enough about Calligra to agree or disagree.

    Posted on Thursday 18 August 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-08-15: If Only They'd Actually Help Enforce GPL

Unfortunately, Edward Naughton is at it again, and everyone keeps emailing me about it, including Brian Proffitt, who quoted my email response to him this morning in his article.

    As I said in my response to Brian, I've written before on this issue and I have nothing much more to add. Naughton has not identified a GPL violation that actually occurred, at least with respect to Google's own distribution of Android, and he has completely ignored my public call for him to make such a formal report to the copyright holders of GPL violations for which he has evidence (if any).

Jon Corbet of LWN has also picked up the story, mostly pontificating on what it would mean if loss of distribution rights under GPLv2§4 were used nefariously instead of in the honorable way it has hitherto been used to defend software freedom. I commented on the LWN post.

I think Jon's right to raise that specific concern, and it's a good reason for projects to upgrade to GPLv3. But, nevertheless, this whole thing isn't even relevant until someone actually documents a real GPL violation that has occurred. As I previously mentioned, I'm aware of plenty of documented violations (thanks to Matthew Garrett), and I'd love it if more people picked up and acted on these violations to enforce the GPL. I again tell Naughton: if you are seriously concerned about enforcing the GPL, then volunteer your time as a lawyer to help. But we all know that's not really what interests you: rather, your job is to spread FUD.

    Posted on Monday 15 August 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-08-05: You're Living in the Past, Dude!

    At the 2000 Usenix Technical Conference (which was the primary “generalist” conference for Free Software developers in those days), I met Miguel De Icaza for the third time in my life. In those days, he'd just started Helix Code (anyone else remember what Ximian used to be called?) and was still president of the GNOME Foundation. To give you some context: Bonobo was a centerpiece of new and active GNOME development then.

Out of curiosity and a little excitement about GNOME, I asked Miguel if he could show me how to get GNOME 1.2 running on my laptop. Miguel agreed to help, quickly taking control of the keyboard and frantically typing and editing my sources.list.

    Debian potato was the just-becoming-stable release in those days, and of course, I was still running potato (this was before my experiment with running things from testing began).
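For those who never ran a Debian system of that vintage: the release a machine tracked was determined by /etc/apt/sources.list, the very file Miguel was editing. A potato-era file might have looked roughly like this (the mirror hostname is illustrative; potato's packages have long since moved to archive.debian.org):

```
# /etc/apt/sources.list on a machine tracking Debian potato (2.2)
deb http://ftp.debian.org/debian potato main contrib non-free
deb-src http://ftp.debian.org/debian potato main contrib non-free

# Moving the machine to woody meant changing "potato" to "woody" above,
# then running: apt-get update && apt-get dist-upgrade
```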

After a few minutes hacking at my keyboard, Miguel realized that I wasn't running woody, Debian's development release at the time. Miguel looked at me and said: You aren't running woody; I can't make GNOME run on this thing. There's nothing I can do for you. You're living in the past, dude! (Those who know Miguel IRL can easily imagine how he'd sound saying this.)

    So, I've told that story many times for the last eleven years. I usually tell it for laughs, as it seems an equal-opportunity humorous anecdote. It pokes some fun at Miguel, at me, at Debian for its release cycle, and also at GNOME (which has, since its inception, tried to never live in the past, dude).

Fact is, though, I rather like living in the past, at least with regard to my computer setup. By way of desktop GUIs, I used twm well into the late 1990s, and fvwm well into the early 2000s. I switched to sawfish (back then still called sawmill) during the relatively brief period when GNOME used it as its default window manager. When Metacity became the default, I never switched, because I'd configured sawfish so heavily.

    In fact, the only actual parts of GNOME 2 that I ever used on a daily basis have been (a) a small unobtrusive panel, (b) dbus (and its related services), and (c) the Network Manager applet. When GNOME 3 was released, I had no plans to switch to it, and frankly I still don't.

    I'm not embarrassed that I consistently live in the past; it's sort of the point. GNOME 3 isn't for me; it's for people who want their desktop to operate in new and interesting ways. Indeed, it's (in many ways) for the people who are tempted to run OSX because its desktop is different than the usual, traditional, “desktop metaphor” experience that had been standard since the mid-1990s.

    GNOME 3 just wasn't designed with old-school Unix hackers in mind. Those of us who don't believe a computer is any good until we see a command line aren't going to be the early adopters who embrace GNOME 3. For my part, I'll actually try to avoid it as long as possible, continue to run my little GNOME 2 panel and sawfish, until slowly, GNOME 3 will seep into my workflow the way the GNOME 2 panel and sawfish did when they were current, state-of-the-art GNOME technologies.

I hope that other old-school geeks will see this distinction: we're past the era when every Free Software project is targeted at us hackers specifically. Failing to notice this will cause us to ignore the deeper problem software freedom faces. GNOME Foundation's Executive Director (and my good friend), Karen Sandler, pointed out in her OSCON keynote something that's bothered her and me for years: the majority of computers at OSCON are Apple hardware running OSX. (In fact, I even noticed Simon Phipps has one now!) That's the world we're living in now. Users who actually know about “Open Source” are now regularly enticed to give up software freedom for shiny things.

    Yes, as you just read, I can snicker as quickly as any old-school command-line geek (just as Linus Torvalds did earlier this week) at the pointlessness of wobbly windows, desktop cubes, and zoom effects. I could also easily give a treatise on how I can get work done faster, better, and smarter because I have the technology of years ago that makes every keystroke matter.

    Notwithstanding that, I'd even love to have the same versatility with GNOME 3 that I have with sawfish. And, if it turns out GNOME 3's embedded Javascript engine will give me the same hackability I prefer with sawfish, I'll adopt GNOME 3 happily. But, no matter what, I'll always be living in the past, because like every other human, I hate changing anything, unless it's strictly necessary or it's my own creation and derivation. Humans are like that: no matter who you are, if it wasn't your idea, you're always slow to adopt something new and change old habits.

Nevertheless, there's actually nothing wrong with living in the past — I quite like it myself. However, I'd suggest that care be taken not to admonish those who make a go at creating the future. (At the risk of making a conclusion that sounds like a time travel joke,) don't forget that their future will eventually become that very past where I and others would prefer to live.

    Posted on Friday 05 August 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2011-07-29: GNU Emacs Developers Will Fix It; Please Calm Down

    fabsh was the first to point me at a slashdot story that is (like most slashdot stories) sensationalized.

The story, IMO, makes the usual mistake of treating a GPL violation as an earth-shattering disaster that imperils the future of software freedom. GPL violations vary in the degree of the problems they create; most aren't earth-shattering.

    Specifically, the slashdot story points to a thread on the emacs-devel mailing list about a failure to include some needed bison grammar in the complete and corresponding sources for Emacs in a few Emacs releases in the last year or two. As you can see there, RMS quickly responded to call it a grave problem … [both] legally and ethically, and he's asked the Emacs developers to help clear up the problem quickly.

    I wrote nearly two years ago that one shouldn't jump to conclusions and start condemning those who violate the GPL without investigating further first. Most GPL violations are mistakes, as this situation clearly was, and I suspect it will be resolved within a few news cycles of this blog post.

And please, while we all see the snicker-inducing irony of the FSF and its GNU project violating the GPL, keep in mind that this is what I've typically called a “community violation”. It's a non-profit volunteer project that made an honest mistake and is resolving it quickly. Meanwhile, I've a list of hundreds of companies who are actively violating the GPL, ignoring users who requested source, and apparently have no interest in doing the right thing until I open an enforcement action against them. So, please keep perspective about how bad any given violation is. Not all GPL violations are of equal gravity, but all should be resolved, of course. The Emacs developers are on it.

    Posted on Friday 29 July 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-07-07: Project Harmony (and “Next Generation Contributor Agreements”) Considered Harmful

Update on 2014-06-10: While this article is about a specific series of attempts to “unify” CLAs and ©AAs into a single set of documents, the issues raised below cover the gamut of problems encountered in many CLAs and ©AAs in common use today in FLOSS projects. Even though both Project Harmony and its reincarnation, the Next Generation Contributor Agreements, appear to have failed, CLAs and ©AAs are increasing in popularity among FLOSS projects, and developers should take action to oppose these agreements for their projects.

Update on 2013-09-05: Project Harmony was recently relaunched under the name the Next Generation of Contributor Agreements. AFAICT, it's been publicly identified as the same initiative, and its funding comes from the same person. I've verified that everything I say below still applies to their current drafts available from the Contributor Agreements project. I also emailed these comments to the leaders of that project before it started, but they wouldn't respond to my policy questions.

Much advertising is designed to convince us to buy or use something that we don't need. When I hear someone droning on about some new, wonderful thing, I have to worry that these folks are actually trying to market something to me.

    Very soon, you're likely to see a marketing blitz for this thing called Project Harmony (which just released its 1.0 version of document templates). Even the name itself is marketing: it's not actually descriptive, but is so named to market a “good feeling” about the project before even knowing what it is. (It's also got serious namespace collision, including with a project already in the software freedom community.)

    Project Harmony markets itself as fixing something that our community doesn't really consider broken. Project Harmony is a set of document templates, primarily promulgated and mostly drafted by corporate lawyers, that entice developers to give control of their software work over to companies.

My analysis below is primarily about how these agreements are problematic for individual developers. An analysis of the agreements as used between companies or organizations might reach the same or different conclusions; I just haven't done that analysis in detail, so I don't know the outcome.

    [ BTW, I'm aware that I've failed to provide a TL;DR version of this article. I tried twice to write one and ultimately decided that I can't. Simply put, these issues are complex, and I had to draw on a decade of software freedom licensing, policy, and organizational knowledge to fully articulate what's wrong with the Project Harmony agreements. I realize that sounds like a It was hard to write — it should be hard to read justification, but I just don't know how to summarize these Gordian problems in a pithy way. I nevertheless hope developers will take the time to read this before they sign a Project Harmony agreement, or — indeed — any CLA or ©AA. ]

    Copyright Assignment That Lacks Real Assurances

First of all, about half of Project Harmony is copyright assignment agreements (©AAs). Assigning copyright completely gives the work over to someone else. Once the ©AA is signed, the work ceases to belong to the assignor. It's as if that work was done by the assignee. There is admittedly some value to copyright assignment, particularly if developers want to ensure that the GPL or other copyleft is enforced on their work and they don't have time to do it themselves. (Although developers can also designate an enforcement agent to do that on their behalf even if they don't assign copyright, so even that necessity is limited.)

    One must immensely trust an assignee organization. Personally, I've only ever assigned some of my copyrights to one organization in my life: the Free Software Foundation, because FSF is the only organization I ever encountered that is institutionally committed to DTRT'ing with copyrights in a manner similar to my personal moral beliefs.

First of all, as I've written about before, FSF's ©AA makes all sorts of promises back to the assignor. Second, FSF is institutionally committed to the GPL and to enforcing GPL in a way that advances FSF's non-profit advocacy mission for software freedom. All of this activity fits my moral principles, so I've been willing to sign FSF's ©AAs.

Yet, I've nevertheless met many developers who refuse to sign FSF's ©AAs. While many such developers like the GPL, they don't necessarily agree with the FSF's moral positions. Indeed, in many cases, developers are completely opposed to assigning copyright to anyone, FSF or otherwise. For example, Linus Torvalds, founder of Linux, has often stated on record that he never wanted to do copyright assignments, for several reasons: [he] think[s] they are nasty and wrong personally, and [he]'d hate all the paperwork, and [he] thinks it would actually detract from the development model.

    Obviously, my position is not as radical as Linus'; I do think ©AAs can sometimes be appropriate. But, I also believe that developers should never assign copyright to a company or to an organization whose moral philosophy doesn't fit well with their own.

FSF, for its part, spells out its moral position in its ©AA itself. As I've mentioned elsewhere, and as Groklaw recently covered in detail, FSF's ©AA makes various legally binding promises to developers who sign it. Meanwhile, Project Harmony's ©AAs, while they put forward a few options that look vaguely acceptable (although they have problems of their own, discussed below), make no such promises mandatory. I have often pointed Harmony's drafters to the terms that FSF has proposed should be mandatory in any for-profit company's ©AA, but Harmony's drafters have refused to incorporate these assurances as a required part of Harmony's agreements. (Note that such assurances would still be required for the CLA options as well; see below for details why.)

Regarding ©AAs, I'd like to note finally that FSF does not require ©AAs for all GNU packages. This confusion is so common that I'd like to draw attention to it, even though it's only a tangential point in this context. FSF's ©AA is only mandatory, to my knowledge, on those GNU packages where either (a) FSF employees developed the first versions or (b) the original developers themselves asked to assign copyright to FSF, upon their project joining GNU. In all other cases, FSF assignment is optional. Some GNU projects, such as GNOME, have their own positions regarding ©AAs that differ radically from FSF's. I seriously doubt that companies who adopt Project Harmony's agreements will ever be as flexible on copyright assignment as FSF, nor will any of the possible Project Harmony options be acceptable under GNOME's existing policy.

    Giving Away Rights to Give Companies Warm Fuzzies?

    Project Harmony, however, claims that the important part isn't its ©AA, but its Contributor License Agreement (CLA). To briefly consider the history of Free Software CLAs, note that the Apache CLA was likely the first CLA used in the Free Software community. Apache Software Foundation has always been heavily influenced by IBM and other companies, and such companies have generally sought the “warm fuzzies” of getting every contributor to formally assent to a complex legal document that asserts various assurances about the code and gives certain powers to the company.

The main point of a CLA (and a somewhat valid one) is to ensure that the developers have verified their right to contribute the code under the specified copyright license. Both the Apache CLA and Project Harmony's CLA go to great lengths of verbosity to require developers to agree that they know the contribution is theirs. In fact, if a developer signs one of these CLAs, the developer makes a formal contract with the entity (usually a for-profit company) stating that the developer knows for sure that the contribution is licensed under the specified license. The developer then takes on all liability if that fact is in any way incorrect or in dispute!

    Of course, shifting away all liability about the origins of the code is a great big “warm fuzzy” for the company's lawyers. Those lawyers know that they can now easily sue an individual developer for breach of contract if the developer was wrong about the code. If the company redistributes some developer's code and ends up in an infringement suit where the company has to pay millions of dollars, they can easily come back and sue the developer0. The company would argue in court that the developer breached the CLA. If this possible outcome doesn't immediately worry you as an individual developer signing a Project Harmony CLA for your FLOSS contribution, it should.

    “Choice of Law” & Contractual Arrangement Muddies Copyright Claims

    Apache's CLA doesn't have a choice of law clause, which is preferable in my opinion. Most lawyers just love a “choice of law” clause for various reasons. The biggest reason is that it means the rules that apply to the agreement are the ones with which the lawyers are most familiar, and the jurisdiction for disputes will be the local jurisdiction of the company, not of the developer. In addition, lawyers often pick particular jurisdictions that are very favorable to their client and not as favorable to the other signers.

    Unfortunately, all of Project Harmony's drafts include a “choice of law” clause1. I expect that the drafters will argue in response that the jurisdiction is a configuration variable. However, the problem is that the company decides the binding of that variable, which almost always won't be the binding that an individual developer prefers. The term will likely be non-negotiable at that point, even though it was configurable in the template.

    Not only that, but imagine a much more likely scenario about the CLA: the company fails to use the outbound license they promised. For example, suppose they promised the developers it'd be AGPL'd forever (although, no such option actually exists in Project Harmony, as described below!), but then the company releases proprietarized versions. The developers who signed the CLA are still copyright holders, so they can enforce under copyright law, which, by itself, would allow the developers to enforce under the laws in whatever jurisdiction suits them (assuming the infringement is happening in that jurisdiction, of course).

    However, by signing a CLA with a “choice of law” clause, the developers agreed to whatever jurisdiction is stated in that CLA. The CLA has now turned what would otherwise be a mundane copyright enforcement action operating purely under the developer's local copyright law into a contract dispute between the developers and the company under the chosen jurisdiction's laws. Obviously that agreement might include AGPL and/or GPL by reference, but the claim of copyright infringement due to violation of GPL is now muddied by the CLA contract that the developers signed, wherein the developers granted some rights and permission beyond GPL to the company.

Even worse, if the developers do bring an action in their own jurisdiction, their local court is forced to interpret the laws of another place. This leads to highly variable and confusing results.

    Problems for Individual Copyright Enforcement Against Third-Parties

    Furthermore, even though individual developers still hold the copyrights, the Project Harmony CLAs grant many transferable rights and permissions to the CLA recipient (again, usually a company). Even if the reasons for requiring that were noble, it introduces a bundle of extra permissions that can be passed along to other entities.

    Suddenly, what was once a simple copyright enforcement action for a developer discovering a copyleft violation becomes a question: Did this violating entity somehow receive special permissions from the CLA-collecting entity? Violators will quickly become aware of this defense. While the defense may not have merit (i.e., the CLA recipient may not even know the violator), it introduces confusion. Most legal proceedings involving software are already confusing enough for courts due to the complex technology involved. Adding something like this will just cause trouble and delays, further taxing our already minimally funded community copyleft enforcement efforts.

    Inbound=Outbound Is All You Need

    Meanwhile, the whole CLA question actually is but one fundamental consideration: Do we need this? Project Harmony's answer is clear: its proponents claim that there is mass confusion about CLAs and no standardization, and therefore Project Harmony must give a standard set of agreements that embody all the options that are typically used.

    Yet, Project Harmony has purposely refused to offer the simplest and most popular option of all, which my colleague Richard Fontana (a lawyer at Red Hat who also opposes Project Harmony) last year dubbed inbound=outbound. Specifically, the default agreement in the overwhelming majority of FLOSS projects is simply this: each contributor agrees to license each contribution using the project's specified copyright license (or a license compatible with the project's license).

No matter which way you dice Project Harmony, the other contractual problems described above make true inbound=outbound impossible, because the CLA recipient is never actually bound formally by the project's license itself. Meanwhile, even under its best configuration, Project Harmony can't adequately approximate inbound=outbound. Specifically, Project Harmony attempts to limit outbound licensing with its § 2.3 (called Outbound License). However, all the copyleft versions of this template include a clause that says: We [the CLA recipient] agree to license the Contribution … under terms of the … licenses which We are using on the Submission Date for the Material. Yet, there is no way for the contributor to reliably verify what licenses are in use privately by the entity receiving the CLA. If the entity is already engaged in, for example, a proprietary relicensing business model at the Submission Date, then the contributor grants permission for such relicensing on the new contribution, even if the rest of § 2.3 promises copyleft. This is not a hypothetical: there have been many cases where it was unclear whether or not a company was engaged in proprietary relicensing, and then later it was discovered that they had been privately doing so for years. As written, therefore, every configuration of Project Harmony's § 2.3 is useless to prevent proprietarization.

    Even if that bug were fixed, the closest Project Harmony gets to inbound=outbound is restricting the CLA version to “FSF's list of ‘recommended copyleft licenses’”. However, this category makes no distinction between the AGPL and GPL, and furthermore ultimately grants FSF power over relicensing (as FSF can change its list of recommended copylefts at will). If the contributors are serious about the AGPL, then Project Harmony cannot assure their changes stay AGPL'd. Furthermore, contributors must trust the FSF for perpetuity, even more than already needed in the -or-later options in the existing FSF-authored licenses. I'm all for trusting the FSF myself in most cases. However, because I prefer plain AGPLv3-or-later for my code, Project Harmony is completely unable to accommodate my licensing preferences to even approximate an AGPL version of inbound=outbound (even if I ignored the numerous problems already discussed).

    Meanwhile, the normal, mundane, and already widely used inbound=outbound practice is simple, effective, and doesn't mix in complicated contract disputes and control structures with the project's governance. In essence, for most FLOSS projects, the copyright license of the project serves as the Constitution of the project, and doesn't mix in any other complications. Project Harmony seeks to give warm fuzzies to lawyers at the expense of offloading liability, annoyance, and extra hoop-jumping onto developers.

    Linux Hackers Ingeniously Trailblazed inbound=outbound

    Almost exactly 10 years ago today, I recall distinctly attending the USENIX 2001 Linux BoF session. At that session, Ted Ts'o and I had a rather lively debate; I claimed that FSF's ©AA assured legal certainty of the GNU codebase, but that Linux had no such assurance. (BTW, even I was confused in those days and thought all GNU packages required FSF's ©AA.) Ted explained, in his usual clear and bright manner, that such heavy-handed methods shouldn't be needed to give legal certainty to the GPL and that the Linux community wanted to find an alternative.

    I walked away skeptically shaking my head. I remember thinking: Ted just doesn't get it. But I was wrong; he did get it. In fact, many of the core Linux developers did. Three years to the month after that public conversation with Ted, the Developer's Certificate of Origin (DCO) became the official required way to handle the “CLA issue” for Linux and it remains the policy of Linux today. (See item 12 in Linux's Documentation/SubmittingPatches file.)

    The DCO, in fact, is the only CLA any FLOSS project ever needs! It implements inbound=outbound in a simple and straightforward way, without giving special powers over to any particular company or entity. Developers keep their own copyright and they unilaterally attest to their right to contribute and the license of the contribution. (Developers can even sign a ©AA with some other entity, such as the FSF, if they wish.) The DCO also gives a simple methodology (i.e., the Signed-off-by: tag) for developers to so attest.
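To make the mechanics concrete, here's a minimal sketch of that workflow (the repository, file, and identity below are hypothetical; only `git commit -s` and the trailer it appends are real git behavior):

```shell
set -e
# Hypothetical throwaway repository for the demonstration.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Jane Hacker"
git config user.email "jane@example.org"

echo 'int main(void) { return 0; }' > hello.c
git add hello.c

# -s (--signoff) appends the Signed-off-by: trailer — the developer's
# unilateral attestation under the DCO — to the commit message.
git commit -q -s -m "Add hello.c"

# Show the resulting commit message, trailer included.
git log -1 --format=%B
```

The maintainer (or anyone else) can then verify the attestation simply by reading the commit message; no signed paperwork or central registry is involved.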

I admit that I once scoffed at the (what I then considered naïve) simplicity of the DCO when compared to FSF's ©AA. Yet, I've since been convinced that the Linux DCO clearly accomplishes the primary job and simultaneously fits how most developers like to work. ©AAs have their place, particularly when the developers find a trusted organization that aligns with their personal moral code and will enforce copyleft for them. However, for CLAs, the Linux DCO gets the important job done and tosses aside the pointless and pro-corporate stuff.

Frankly, if I have to choose between making things easy for developers and making them easy for corporate lawyers, I'm going to choose the former every time: developers actually write the code, while, most of the time, companies' legal departments just get in our way. The FLOSS community needs just enough CYA stuff to get by; the DCO shows what's actually necessary, as opposed to what corporate attorneys wish they could get developers to do.

    What about Relicensing?

    Admittedly, Linux's DCO does not allow for wholesale relicensing of the code by some single entity; it's indeed the reason a Linux switch to GPLv3 will be an arduous task of public processes to ensure permission to make the change. However, it's important to note that the Linux culture believes in GPLv2-only as a moral foundation and principle of their community. It's not a principle I espouse; most of my readers know that my preferred software license is AGPLv3-or-later. However, that's the point here: inbound=outbound is the way a FLOSS community implements their morality; Project Harmony seeks to remove community license decision-making from most projects.

    Meanwhile, I'm all for the “-or-later” brand of relicensing permission; GPL, LGPL and AGPL have left this as an option for community choice since GPLv1 was published in the late 1980s. Projects declare themselves GPLv2-or-later or LGPLv3-or-later, or even (GPLv1-or-later|Artistic) (ala Perl 5) to identify their culture and relicensing permissions. While it would sometimes be nice to have a broad post-hoc relicensing authority, the price for that is steep: abandonment of community clarity regarding what terms define their software development culture.

    An Anti-Strong-Copyleft Bias?

    Even worse, Project Harmony remains biased against some of the more fine-grained versions of copyleft culture. For example, Allison Randal, who is heavily involved with Project Harmony, argued on Linux Outlaws Episode 204 that “Most developers who contribute under a copyleft license — they'd be happy with any copyleft license — AGPL, GPL, LGPL”. Yet there are well-stated reasons why developers might pick GPL rather than LGPL. Thus, giving a for-profit company (or a non-profit that doesn't necessarily share the developers' values) unilateral decision-making power to relicense GPL'd works under LGPL or other weak copyleft licenses is ludicrous.

    In its 1.0 release, Project Harmony attempted to add a “strong copyleft only” option. It doesn't actually work, of course, for the various reasons discussed in detail above. But even so, this solution is just one option among many, and is not required as a default when a project is otherwise copylefted.

    Finally, it's important to realize that the GPLv3, AGPLv3, and LGPLv3 already offer a “proxy option”; projects can name someone to decide the -or-later question at a later time. So, for those projects that use any of the set { LGPLv3-only, AGPLv3-only, GPLv3-only, GPLv2-or-later, GPLv1-or-later, or LGPLv2.1-or-later }, the developers already have mechanisms to move to later versions of the license with ease — by specifying a proxy. There is no need for a CLA to accomplish that task in the GPL family of licenses, unless the goal is to erode stronger copylefts into weaker copylefts.

    This is No Creative Commons, But Even If It Were, Is It Worth Emulation?

    Project Harmony's proponents love to compare the project to Creative Commons, but the comparison isn't particularly apt. Furthermore, I'm not convinced the FLOSS community should emulate the CC license suite wholesale, as some of the aspects of the CC structure are problematic when imported back into FLOSS licensing.

    First of all, Larry Lessig (who is widely considered a visionary) started the CC licensing suite to bootstrap a Free Culture movement modeled on the software freedom movement (which he spent a decade studying). However, Lessig made some moral compromises in an attempt to build a bridge to the “some rights reserved” mentality. As such, many of the CC licenses — notably those that include the non-commercial (NC) or no-derivatives (ND) terms — are considered overly restrictive of freedom and are therefore shunned by Free Culture activists and software freedom advocates alike.

    Over nearly a decade, such advocates have slowly begun to convince copyright holders to avoid CC's NC and ND options, but CC's own continued promulgation of those options lends them undue legitimacy. Thus, CC and Project Harmony make the same mistake: they act amorally in an attempt to build a structure of licenses/agreements that tries to bridge a gulf in understanding between a FaiF community and those only barely dipping their toe in that community. I chose the word amoral, as I often do, to note a situation where important moral principles exist, but the primary actors involved seek to remove morality from the considerations under the guise of leaving decision-making to the “magic of the marketplace”. Project Harmony is repeating the mistake of the CC license suite that the Free Culture community has spent a decade (and counting) cleaning up.


    Please note that IANAL and TINLA. I'm just a community- and individual-developer- focused software freedom policy wonk who has some grave concerns about how these Project Harmony Agreements operate. I can't give you a fine-grained legal analysis, because I'm frankly only an amateur when it comes to the law, but I am an expert in software freedom project policy. In that vein — corporate attorney endorsements notwithstanding — my opinion is that Project Harmony should be abandoned entirely.

    In fact, the distinction between policy and legal expertise actually shows the root of the problem with Project Harmony. It's a system of documents designed by a committee primarily comprised of corporate attorneys, yet it's offered up as if it's a FLOSS developer consensus. Indeed, Project Harmony itself was initiated by Amanda Brock, a for-profit corporate attorney for Canonical, Ltd., who remains involved in its drafting. Canonical, Ltd. later hired Mark Radcliffe (a big law firm attorney, who has defended GPL violators) to draft the alpha revisions of the document, and Radcliffe remains involved in the process. Furthermore, the primary drafting process was done secretly in closed meetings dominated by corporate attorneys until the documents were almost complete; the process was not made publicly open to the FLOSS community until April 2011. The 1.0 documents differ little from the drafts that were released in April 2011, and thus remain to this day primarily documents drafted in secrecy by corporate attorneys who have only a passing familiarity with software freedom culture.

    Meanwhile, I've asked Project Harmony's advocates many times who is in charge of Project Harmony now, and no one can give me a straight answer. One is left to wonder who decides final draft approval and what process exists to accept or reject text for the drafts. The process, which was once conducted in secrecy, now appears to be in chaos because it was opened up too late for fundamental problems to be resolved.

    A few developers are indeed actively involved in Project Harmony. But Project Harmony is not something that most developers requested; it was initiated by companies who would like to convince developers to passively adopt overreaching CLAs and ©AAs. To me, the whole Project Harmony process feels like a war of attrition to convince developers to accept something that they don't necessarily want with minimal dissent. In short, the need for Project Harmony has not been fully articulated to developers.

    Finally, I ask, what's really broken here? The industry has been steadily and widely adopting GNU and Linux for years. GNU, for its part, has FSF assignments in place for many of its earlier projects, but the later projects (GNOME, in particular) have either been against both ©AA's and CLA's entirely, or are mostly indifferent to them and use inbound=outbound. Linux, for its part, uses the DCO, which does the job of handling the urgent and important parts of a CLA without getting in developers' way and without otherwise forcing extra liabilities onto the developers and handing over important licensing decisions (including copyleft-weakening ones) to a single (usually for-profit) entity.

    In short, Project Harmony is a design-flawed solution looking for a problem.

    Further Reading

    [0] Project Harmony advocates will likely claim that their § 5, “Consequential Damage Waiver” protects developers adequately. I note that it explicitly leaves out, for example, statutory damages for copyright infringement. Also, some types of damages cannot be waived (which is why that section shouts at the reader TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW). Note my discussion of jurisdictions in the main text of this article, and consider the fact that the CLA recipient will obviously select a jurisdiction where the fewest possible damages can be waived. Finally, note that the OR US part of that § 5 is optionally available, and surely corporate attorneys will use it, which means that if they violate the agreement, there's basically no way for you to get any damages from them, even if they break their promise to keep the code copylefted.

    [1] Note: Earlier versions of this blog post conflated slightly “choice of venue” with “choice of law”. The wording has been cleared up to address this problem. Please comment or email me if you believe it's not adequately corrected.

    Posted on Thursday 07 July 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-07-04: Weekly Summary Summary, 2011-06-26 through 2011-07-04

    Posted on Monday 04 July 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.



  • 2011-05-31: Should a Power-User Key Mapping Change Be This Difficult?

    It's been some time since X made me hate computing, but it happened again today (well, yesterday into the early hours of today, actually).

    I got the stupid idea to upgrade to squeeze from lenny yesterday. I was at work, but it was actually a holiday in the USA, and I figured it would be a good time to do some sysadmin work instead of my usual work.

    I admittedly had some things to fix that were my fault: I had backports and other mess installed, but upon removing them, the upgrade itself was more-or-less smooth. I faced only a minor problem with my MD device for /boot not starting properly, but the upgrade warned me that I needed to switch to properly using the UUIDs for my RAID arrays, and once I corrected that, all booted fine, even with GRUB2 on my old hardware.

    Once I was in X, things got weird, keyboard-wise. My meta and alt keys weren't working. BTW, I separate Alt from Meta, making my actual Alt key into a meta key, while my lower control is set to an Alt (ala Mod2), since I throw away caps lock and make it a control. (This is for when I'm on the laptop keyboard rather than the HHKB.)

    I've used the same xmodmap for two decades to get this done:

                    keycode 22 = BackSpace
                    clear Mod1
                    clear Mod2
                    clear Lock
                    clear Control
                    keycode 66  = Control_L
                    keycode 64 = Meta_L
                    keycode 113 = Meta_R
                    keycode 37 = Alt_L
                    keycode 109 = Alt_R
                    add Control = Control_L
                    add Mod1 = Meta_L
                    add Mod1 = Meta_R
                    add Mod2 = Alt_L
                    add Mod2 = Alt_R

    This just “doesn't work” in squeeze (or presumably any Xorg 7.5 system). Instead, it just gives this error message:

                    X Error of failed request:  BadValue (integer parameter out of range for operation)
                      Major opcode of failed request:  118 (X_SetModifierMapping)
                      Value in failed request:  0x17
                      Serial number of failed request:  21
                      Current serial number in output stream:  21
    … and while my Control key ends up fine, it leaves me with no Mod1 nor Mod2 key.

    There appear to be at least two Debian bugs (564327 and 432011), which were filed against squeeze before it was released. In retrospect, I sure wish they'd been marked release-critical! (There's also an Ubuntu bug, which of course just punts to the upstream Debian bug.) There are also two further upstream bugs at freedesktop (20145 and 11822), although Daniel Stone thinks the main problem might be fixed upstream.

    I gather that many people “in the know” believe xmodmap to be deprecated, and we all should have switched to xkb years ago. I even got snarky comments to that effect. (Update:) However, after I made this first post, quite angry after 8 hours of just trying to make my Alt key DTRT, I was elated to see Daniel Stone indicate that xmodmap should be backwards compatible. Almost every time I get pissed off about some Free Software not working, a developer shows up and tells me they want to fix it. This is in some ways just as valuable as the thing being fixed: knowing that the developer doesn't want the bug to be there — it means it'll be fixed eventually and only patience is required.

    However, the bigger problem really is that xkb appears to lack good documentation. If any exists, I can't find it. madduck did this useful blog post (and, later, vinc17 showed me some docs he was working on too). These are basically the only things I could find that were real help on the issue, and they were sparse. I was able to learn, after hours, that this should be the rough equivalent to my old modmap:

                    partial modifier_keys
                    xkb_symbols "thinkpad" {
                        replace key <CAPS>  {  [ Control_L, Control_L ] };
                        modifier_map  Control { <CAPS> };
                        replace key <LALT>  {  [ Meta_L ] };
                        modifier_map Mod1   { Meta_L, Meta_R };
                        key <LCTL> { [ Alt_L ] };
                        modifier_map Mod2 { Alt_L };
                    };

    But, you can't just load that with a program! No, it must be placed in a file called /path/symbols/bkuhn, which is then loaded with an incantation like this:

                    xkb_keymap {
                            xkb_keycodes  { include "evdev+aliases(qwerty)" };
                            xkb_types     { include "complete"      };
                            xkb_compat    { include "complete"      };
                            xkb_symbols   { include "pc+us+inet(evdev)+bkuhn(thinkpad)"     };
                            xkb_geometry  { include "pc(pc105)"     };
                    };

    …which, in turn, must be fed on stdin to: xkbcomp -I/path - $DISPLAY. Oh, did I mention you have to get the majority of that stuff above by running setxkbmap -print, then modify it to add the bkuhn(thinkpad) part? I'm impressed that madduck figured this all out. I mean, I know xmodmap was arcane incantations and all, but this is supposed to be clearer and better for users wanting to change key mappings? WTF!?!
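    Put together, the whole dance goes roughly like this (a sketch of my setup, not gospel: the /path directory and the bkuhn(thinkpad) symbols name are mine from above, and the inet(evdev) string is whatever setxkbmap -print happens to emit on your box):

    ```shell
    # Dump the X server's current keymap description:
    setxkbmap -print > /tmp/keymap.xkb
    # Splice the custom symbols section into the xkb_symbols include line:
    sed -i 's/inet(evdev)/inet(evdev)+bkuhn(thinkpad)/' /tmp/keymap.xkb
    # Compile and upload it, telling xkbcomp where to find symbols/bkuhn:
    xkbcomp -I/path /tmp/keymap.xkb "$DISPLAY"
    ```

    The last step needs a running X server, of course; the sed splice is the only part you can sanity-check offline.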

    Oh, so, BTW, my code in /path/symbols/bkuhn didn't work. I tried every incantation I could think of, but I couldn't get it to think about Alt and Meta as separate Mod2 and Mod1 keys. I think it's actually a bug, because weird things happened when I added lines like:

                        modifier_map Mod5 { <META> };
    Namely, when I added the above line to my /path/symbols/bkuhn, the Mod2 was then picked up correctly (magically!), but then both LCTL and LALT acted like a Mod2, and I still had no Mod1! Frankly, I was too desperate to get back to my 20 years of keystroke memory to try to document what was going on well enough for a coherent bug report. (Remember, I was doing all this on a laptop where my control key kept MAKING ME SHOUT INSTEAD OF DOING ITS JOB.)

    I finally got the idea to give up entirely on Mod2 and see if I could force the literal LCTL key to be a Mod3, hopefully allowing Emacs to again see my usual Mod1 Meta expectations for LALT. So, I saw what some of the code in /usr/share/X11/xkb/symbols/altwin did to handle Mod3, and I got this working (although it required a sawfish change to expect Mod3 instead of Mod2, that part was 5 seconds of search and replace). Here's what finally worked as the contents of /path/symbols/bkuhn:

                    partial modifier_keys
                    xkb_symbols "thinkpad" {
                        modifier_map  Control { <CAPS> };
                        replace key <LALT>  {  [ Meta_L ] };
                        modifier_map Mod1   { Meta_L };
                        key <LCTL> { type[Group1] = "ONE_LEVEL",
                                     symbols[Group1] = [ Super_L ] };
                        modifier_map Mod3 { Super_L };
                    };

    So, is all this really less arcane than xmodmap? Was the eight hours of my life spent learning xkb somehow worth it, because now I know a better tool than xmodmap? I realize I'm a power user, but I'm not convinced that it should be this hard even for power users. I felt reminiscent of the days when I had to use Eric Raymond's mode timings howto to get X working. That was actually easier than this!

    Even though spot claimed this is somehow Debian's fault, I don't believe him. I bet I would run into the same problem on any system using Xorg 7.5. There are clearly known bugs in xmodmap, and I think there is probably a subtle bug I uncovered that exists in xkb, but I am not sure I can coherently report it without revisiting this horrible computing evening again. Clearly, that first thing I tried should not have made two keys be a Mod2, but only when I moved META into Mod5, right?

    BTW, if you're looking for me online tomorrow early, you hopefully know where I am. I'm going to bed two hours before my usual waketime. Ugh. (Update: tekk later typo'ed xmodmap as ‘xmodnap’. Quite fitting; after working on that all night, I surely needed an xmodnap!)

    Update on 2013-04-03: I want to note that the X11 and now Wayland developer named Daniel Stone took an interest in this bug and actually followed up with me two years later, giving me a report. It is apparently really hard to fix without a lot of effort, and I've switched to xkb (which I think is even more arcane), but it mostly works, except when I'm in Xnest. But my main point is that Daniel stuck with the problem and, while he didn't get resolution, he kept me posted. That's a dedicated Free Software developer; I'm just a random user, after all!

    Posted on Tuesday 31 May 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-05-26: Choosing A License

    Brett Smith of the FSF has announced a new tutorial available on the GNU website that gives advice about picking a license for your project.

    I'm glad that Brett wrote this tutorial. My typical answer when someone asks me which license to choose is to say: Use AGPLv3-or-later unless you can think of a good reason not to. That's a glib answer that is rarely helpful to the questioner. Brett's article is much better and more useful.

    For me, the particularly interesting outcome of the tutorial is how it finishes the turbulent trajectory of the FSF's relationship with Apache's license. Initially, there was substantial acrimony between the Apache Software Foundation and the FSF because version 2.0 of the Apache License is incompatible with the GPLv2, a point on which the Apache Software Foundation has long disagreed with the FSF. You can even find cases where I was opining in the press about this back when I was Executive Director of the FSF.

    An important component of GPLv3 drafting was to reach out and mend relationships with other useful software freedom licenses that had been drafted in the time since GPLv2 was released. Brett's article published yesterday shows the culmination of that fence-mending: Apache-2.0 is now not only compatible with the GPLv3 and AGPLv3, but also the FSF's recommended permissive license!

    Posted on Thursday 26 May 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-05-19: Clarification on Android, its (Lack of) Copyleft-ness, and GPL Enforcement

    I'm grateful to Brian Proffitt for clarifying some of these confusions about Android licensing. In particular, I'm glad I'm not the only one who has cleared up the confusions that Edward J. Naughton keeps spreading regarding the GPL.

    I noted that Naughton even commented on Proffitt's article; the comment spreads even more confusion about the GPL. In particular, Naughton claims that most BusyBox GPL violations are on unmodified versions of BusyBox. That's just absolutely false, if for no other reason than that a binary is a modified version of the source code in the first place, and nearly all BusyBox GPL violations involve a binary-only version distributed without any source (nor an offer therefor).

    Mixed in with Naughton's constant confusions about what the GPL and LGPL actually require, he does have a possibly valid point lurking: there are a few components in Android/Linux that are under copyleft licenses, namely Linux (GPL) and Webkit (LGPL). Yet, in all of Naughton's screeching about this issue, I haven't seen any clear GPL or LGPL violation reports — all I see is speculation about what may or may not be a violation without any actual facts presented.

    I'm pretty sure that I've spent more time reading and assessing the veracity of GPL violation reports than anyone on the planet. I don't talk about this part of it much: but there are, in fact, a lot of false alarms. I get emails every week from users who are confused about what the GPL and LGPL actually require, and I typically must send them back to collect more details before I can say with any certainty a GPL or LGPL violation has occurred.

    Of course, as a software freedom advocate, I'm deeply dismayed that Google, Motorola and others haven't seen fit to share a lot of the Android code in a meaningful way with the community; failure to share software is an affront to what the software freedom movement seeks to accomplish. However, every reliable report that I've seen indicates that there are no GPL nor LGPL violations present. Of course, if someone has evidence to the contrary, they should send it to those of us who do GPL enforcement. Meanwhile, despite Naughton's public claims that there are GPL and LGPL violations occurring, I've received no contact from him. Don't you think if he was really worried about getting a GPL or LGPL violation resolved, he'd contact the guy in the world most known for doing GPL enforcement and see if I could help?

    Of course, Naughton hasn't contacted me because he isn't really interested in software freedom. He's interested in getting press for himself, and writing vague reports about Android copyrights and licensing is a way to get lots of press. I put out now a public call to anyone who believes they haven't received source code that they were required to get under GPL or LGPL to get in touch with me and I'll try to help, or at the very least put you in touch with a copyright holder who can help do some enforcement with you. I don't, however, expect to see a message in my inbox from Naughton any time soon, nor do I expect him to actually write about the wide-spread GPL violations related to Android/Linux that Matthew Garrett has been finding. Garrett's findings are the real story about Android/Linux compliance, but it's presumably not headline-getting enough for Naughton to even care.

    Finally, Naughton is a lawyer. He has the skills at hand to actually help resolve GPL violations. If he really cared about GPL violations, he'd offer his pro bono help to copyright holders to assist in the overwhelming onslaught of GPL violations. I've written and spoken frequently about how I and others who enforce the GPL are really lacking in talented person-power to do more enforcement. Yet, again, I haven't received an offer from Naughton or these other lawyers who are opining about GPL non-compliance to help me get some actual GPL compliance done. I await their offers, but I'm certainly not expecting they'll be forthcoming.

    (BTW, you'll notice that I don't link to Naughton's actual article myself; I don't want to give him any more linkage than he's already gotten. I'm pretty aghast at the Huffington Post for giving a far-reaching soapbox to such shoddy commentary, but I suppose that I shouldn't expect better from a company owned by AOL.)

    Posted on Thursday 19 May 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-05-18: Germany Trip: Samba XP Keynote and LinuxTag Keynote

    I just returned a few days ago to the USA after one week in Germany. I visited Göttingen for my keynote at Samba XP (which I already blogged about). Attending Samba XP was an excellent experience, and I thank SerNet for sponsoring my trip there. Since going full-time at Conservancy last year, I have been trying to visit the conferences of each of Conservancy's member projects. It will probably take me years to do this, but given that Samba is one of Conservancy's charter members, it's good that I have finally visited Samba's annual conference. It was even better that they asked me to give a keynote talk at Samba XP.

    I must admit that I didn't follow the details of many of the talks other than Tridge's Samba 4 Status Report talk and Jeremy's The Death of File Protocols. This time I really mean it! talk. The rest, unsurprisingly, were highly specific and detailed about Samba, and since I haven't been a regular Samba user myself since 1996, I didn't have the background information required to grok the talks fully. But I did see a lot of excited developers, and it was absolutely wonderful to meet the entire Samba Team for the first time after exchanging email with them for so many years.

    It's funny to see how different communities tend to standardize around the same kinds of practices with minor tweaks. Having visited a lot of project-specific conferences for Conservancy's members, I'm seeing how each community does their conference, and one key thing all projects have in common is the same final conference session: a panel discussion with all the core developers.

    The Samba Team has their own little tweak on this. First, John Terpstra asks all speakers at the conference (which included me this year) to join the Samba Team and stand up in front of the audience. Then, the audience can ask any final questions of all speakers (this year, the attendees had none). Then, the Samba Team stands up in front of the crowd and takes questions.

    The Samba tweak on this model is that the Samba Team is not permitted to sit down during the Q&A. This year, it didn't last that long, but it was still rather amusing. I've never seen a developers' panel before where the developers couldn't sit down!

    After Samba XP, I headed “back” to Berlin (my flight had landed there on Saturday and I'd taken the Deutsche Bahn ICE train to Göttingen for Samba XP), and arrived just in time to attend LinuxNacht, the LinuxTag annual party. (WARNING: name dropping follows!) It was excellent to see Vincent Untz, Lennart Poettering, Michael Meeks and Stefano Zacchiroli at the party (listed in order I saw them at the party).

    The next day I attended Vincent's talk, which was about cross-distribution collaboration. It was a good talk, although I think Vincent glossed over the fact that many distributions (Fedora, Ubuntu, and OpenSUSE, specifically) are controlled by companies and that cross-distribution collaboration has certain complications because of this corporate influence. I talked with Vincent in more detail about this later, and he argued that the developers at the companies in question have a lot of freedom to operate, but I maintain there are subtle (and sometimes, not so subtle) influences that cause problems for cross-distribution collaboration. I also encouraged Vincent to listen to Richard Fontana's talk, Open Source Projects and Corporate Entanglement, that Karen and I released as an episode of the FaiF oggcast.

    I also attended Martin Michlmayr's talk on SPDX. I kibitzed more than I should have from the audience, pointing out that while SPDX is a good “first start”, it's a bit of a “too little, too late” attempt to address and prevent the flood of GPL violations that are now all too common. I believe SPDX is a great tool for those who already are generally in compliance, but it isn't very likely to impact the more common violations, wherein the companies just ignore their GPL obligations. A lively debate ensued on this topic. I frankly hope to be proved wrong on this; if SPDX actually ends or reduces GPL violations, I'll be happy to work on something else instead.

    On Friday afternoon, I gave my second keynote of the week, which was an updated version of my talk, 12 Years of GPL Compliance: A Historical Perspective. It went well, although I misunderstood and thought I had a full hour slot, but only actually had a 50 minute slot, so I had to rush a bit at the end. I really do hate rushing at the end when speaking primarily to a non-native-English-speaking audience, as I know I'm capable of speaking English way too fast (a problem that I am constantly vigilant about under normal public speaking circumstances).

    The talk was nevertheless pretty well received, and afterward, I was surrounded by a gaggle of interested copyleft enthusiasts, who, as always, were asking what more can be done to enforce the GPL. My talks on enforcement always tend to elicit this reaction, since my final slides are a bit depressing with regard to the volume of GPL enforcement that's currently occurring.

    Meanwhile, I also decided I should also start putting up my slides from talks in a more accessible fashion. Since I use S5 (although I hope to switch to jQuery S5 RSN), my slides are trivially web-publishable anyway. While I've generally published the source code to my slides, it makes sense to also make compiled, quickly viewable versions of my slides on my website too. Finally, I realized I should also put my upcoming public speaking events on my frontpage and have done so.

    After a late lunch on Friday, I saw only the very end of Lennart's talk on systemd, and then I visited for a while with Claudia Rauch, Business Manager of KDE, e.V. in the KDE booth. Claudia kindly helped me practice my German a bit by speaking slowly enough that I could actually parse the words.

    I must admit I was pretty frustrated all week that my German is now so poor. I studied German for two years in high school and one semester in college. I even participated in a three-week student exchange trip to a Gymnasium (the German term for a college-prep high school) in Munich in 1990. Yet, my German speaking skills are just a degraded version of what they once were.

    Meanwhile, I did rather like Berlin's Tegel airport (TXL). It's a pretty small airport, but I really like its layout. Because of its small size, each check-in area is attached to a security checkpoint, which is then directly connected to the gate. While this might seem a bit tight, it makes it very easy to check-in, go through security, and then be right at your gate. I can understand why an airport this small would have to be closed (it's slated for closure in 2012), but I am glad that I got a chance to travel to it (and probably again, for the Desktop Summit) before it closes.

    Posted on Wednesday 18 May 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-05-10: Samba XP Keynote, Jeremy's GPLv3 talk, & GPLv2/LGPLv3

    This morning, I gave the keynote talk at Samba XP. I was really honored to be invited to speak to Samba XP (the Samba Developers and Users Conference).

    My talk, entitled Samba, GPL Enforcement, and the GPLv3 was about GPL enforcement, and how it relates to the Samba project and embedded devices. I've pushed my slides to my gitorious “talks” project. That's of course just the source code of the slides. Previously, some folks have complained that they have trouble building the slides because they don't have pandoc or other such dependencies installed. (I do, BTW, believe that my Installation Information is adequate, even though the talk isn't GPLv3'd, but it does have some dependencies :). Anyway, I've put up an installed version of my Samba XP slides as well.

    Some have asked if there's a recording of the talk. I see video cameras and the like here at Samba XP, and I will try to get the audio for a future FaiF Cast.

    Speaking of FaiFCast, Karen and I timed it (mostly by luck) so that, while I'm at Samba XP, we'd release FaiF 0x0F, which includes audio from Jeremy's Linux Collaboration Summit talk about why Samba chose to switch to GPLv3. BTW, I'm sorry I didn't do show notes this week, but because of being at Samba XP the last few days, I wasn't able to write detailed show notes. However, the main thing you need is Jeremy's slides, which are linked from the show notes section.

    Later this week, I'm giving the Friday keynote at LinuxTag, also on GPL enforcement (it's at 13:00 on Friday 2011-05-13). I hope those of you who can come to Berlin will come see my talk!

    Finally, Ivo de Decker in the audience at Samba XP asked about LGPLv3/GPLv2 incompatibility. In my answer to the question, I noted the GPL Compatibility Matrix on the GNU site. Also, regarding the specific LGPLv3 compatibility issue, I mentioned a post I made last year on the GNOME desktop-devel-list about the LGPLv3/GPLv2 issue. I promised that I'd also quote that post here in my blog, so that there would be a stable URL that discussed the issue. I therefore quote the relevant parts of that email here:

    The most important point [about GPLv2-only/LGPLv3-or-later incompatibility] I'd like to make is to suggest a possible compromise. Specifically, I suggest disjunctive licensing, (GPLv2|LGPLv3-or-later), which could be implemented like this:

    This program's license gives you software freedom; you can copy, modify, convey, propagate, and/or redistribute this software under the terms of either:

    • the GNU Lesser General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
    • OR
    • the GNU General Public License, version 2 only, as published by the Free Software Foundation.

    In addition, when you convey, distribute, and/or propagate this software and/or modified versions thereof, you may also preserve this notice so that recipients of such distributions will also have both licensing options described above.

    A good moniker for this license is (GPLv2|LGPLv3-or-later). It actually gives 3+ licensing options to downstream: they can continue under the full (GPLv2|LGPLv3-or-later), or they can use GPLv2-only, or they can use LGPLv3 (or any later version of the LGPL).

    Some folks will probably note this isn't that different from LGPLv2.1-or-later. The key difference, though, is that it removes LGPLv2.1 from the mix. If you've read LGPLv2.1 lately, you've seen that it really shows its age. LGPLv3 is a much better implementation of the weak copyleft idea. If any license needs deprecation, it's LGPLv2.1. I thus personally believe an upgrade to (GPLv2|LGPLv3-or-later) is something worth doing right away.

    I note, BTW, that existing code licensed LGPLv2.1-or-later has also already given permission to migrate to the license (GPLv2|LGPLv3-or-later). Specifically, LGPLv2.1 permits you to relicense the work under GPLv2 if you want to. Furthermore, LGPLv2.1-or-later permits you to relicense under LGPLv3-or-later. Therefore, LGPLv2.1-or-later can, at anyone's option, be upgraded to (GPLv2|LGPLv3-or-later).

    Note the incompatibility exists on both [GPLv2-only and LGPLv3] sides (it proverbially takes two to tango), but the incompatibility centers primarily around the strong copyleft on the GPLv2 side, not the weak copyleft on the LGPLv3 side. Specifically, GPLv2 requires that:

    You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License.
    You may not impose any further restrictions on the recipients' exercise of the rights granted herein.

    This is part of the text that creates copyleft: making sure that other terms can't be imposed.

    The problem occurs in interaction with another copyleft license (even a weak one). No two copyleft implementations are isomorphic, so the details of their requirements usually differ. LGPLv3, for its part, doesn't care much about additional restrictions imposed by another license (hence its weak copyleft nature). However, from the point of view of the GPLv2-side observer, any additional requirements, even minor ones imposed by LGPLv3, are merely “further restrictions”.

    This is why copyleft licenses, when they want compatibility, have to explicitly permit relicensing (as LGPLv2 does for GPLv2/GPLv3 and as LGPLv3 does for GPLv3), by allowing you to “upgrade” from the current copyleft to another. To be clear, from the point of view of the LGPLv3 observer, there are no qualms about “upgrading” from LGPLv3 to GPLv2. The problem occurs from the GPLv2 side, specifically because the (relatively) minor things that LGPLv3 requires are written differently from the similar things asked for in GPLv2.

    It's a common misconception that the LGPL imposes no licensing requirements whatsoever on “works that use the library” (LGPLv2) or the “Application” (LGPLv3). That's not completely true; for example, in LGPLv3 § 4+5 (and LGPLv2.1 § 6+7), you find various requirements regarding licensing of such works. Those requirements aren't strict and are actually very easy to comply with. However, from GPLv2's point of view, they are “further restrictions”, since they are not written in exactly the same fashion as in GPLv2.

    (BTW, note that LGPLv2.1's compatibility with GPLv2 and/or GPLv3 comes explicitly from LGPLv2.1's Section 3, which allows direct upgrade to GPLv2 or GPLv3, or to any later version published by FSF).

    I hope the above helps some to clarify the GPLv2/LGPLv3 incompatibility.

    Posted on Tuesday 10 May 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-05-03: Mono Developers Losing Jobs Isn't Good

    Both RMS and I have been critical of Mono, which is an implementation of Microsoft's C# language infrastructure for GNU/Linux systems. (Until recently, at Novell, Miguel De Icaza has led a team of developers working on Mono.)

    Most have probably heard that the Attachmate acquisition of Novell completed last week, and that reports of who will be fired because of the acquisition have begun to trickle out. This evening, it's been reported that the developers working on Mono will be among those losing their jobs.

    In the last few hours, I've seen some folks indicating that this is a good outcome. I worry that this sort of response is somehow inspired by the criticisms and concerns about Mono that software freedom advocates like myself raised. I thus seek to clarify the concerns regarding Mono, and point out why it's unfortunate that these developers won't work on Mono anymore.

    First of all, note that the concerns about Mono are that many Microsoft software patents likely read on any C# implementation, and Microsoft's so-called “patent promise” is not adequate to defend the software freedom community. Anyone who uses Mono faces software patent danger from Microsoft. This is precisely why using Mono to write new applications, targeted for GNU/Linux and other software freedom systems, should be avoided.

    Nevertheless, Mono should exist, for at least one important reason: some developers write lots and lots of new code on Microsoft systems in C#. If those developers decide they want to abandon Microsoft platforms tomorrow and switch to GNU/Linux, we don't want them to change their minds and decide to stay with Microsoft merely because GNU/Linux lacks a C# implementation. Obviously, I'd support convincing those developers to learn another language system so they won't write more code in C#, but initially, the lack of a Free Software C# implementation might impede their switch to Free Software.

    This is a really subtle point that has been lost in the anti-Mono rhetoric. I am not aware of any software freedom advocate who wants Mono to cease to exist. The problem that I and others point out is this: it's dangerous to write new code that relies on technology that's likely patented by Microsoft — a company that's known to shake down or even sue Free-Software-using companies over patents. But the value of Mono (while much more limited than its strongest proponents claim) is still apparent and real: it has a good chance to entice developers living in a purely Microsoft environment to switch to a software freedom environment. It was therefore valuable that Novell was funding developers to work on Mono; it's a bad outcome for software freedom that those developers will lose their jobs. Finally, while perhaps some of those developers might get jobs working on more urgent Free Software tasks, many will likely end up in jobs doing proprietary software development. And developers switching from Free Software work to proprietary software work is surely always a loss for software freedom.

    Update (2011-05-04): ciarang pointed out to me that Mono for Android is proprietary software. As such, it's certainly better if no one is working on that proprietary project anymore. However, I would make an educated guess that most of the employed Mono developers at Novell were working on the Free Software components, so the above analysis in the main blog post still likely applies in most cases.

    Posted on Tuesday 03 May 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2011-04-29: Hopefully My Voice Will Hold Out

    Those of you who follow me online already know that I caught a rhinovirus, and was very sick while at the 2011 Linux Collaboration Summit (LCS). Unfortunately, the illness got worse since I “worked through” it while at LCS, and I was too sick to work the entire week afterward (the week of 2011-04-11).

    I realized thereafter that, before the conference, I forgot to even mention online that I was speaking and chairing the legal track at LCS. I can't blame that on the illness, since I should have noted it on my blog the week before.

    So, just barely, I'm posting ahead of time about my appearances this weekend at LinuxFest Northwest (LFNW). I have been asked to give four (!) talks in two days; and unfortunately three are scheduled almost right in a row in one day (I begged the organizers to fix it so I was giving two each day, but they'd already locked in the schedule, and even though I told them within hours of the schedule going up, they weren't able to change it.)

    It's a rather amusing story how I ended up giving four talks. Most of you that go to many conferences (and particularly those that speak at them) know that the hardest part of speaking is preparing a new talk. I learned in graduate school that you must practice talks to keep the quality high, and if a talk is new, I usually try to practice twice. That's a pretty large time investment, not to mention the research that has to go into a talk.

    So, what I typically do is have between three and five talks that are “active” on my playlist. I'll keep a talk in rotation for about ten to eighteen months and then discontinue it (unless there's at least 40% new material I can cycle in, which I more-or-less consider a new talk).

    Often, I'll submit up to four active talks to a given conference. I do this for a couple of reasons. The first and foremost reason is to give choice to the program chairs. If I'm prepared to speak on an array of topics, I'd rather offer up what I can to the chairs so that they can pick the best fit for the track they wish to construct. The second reason, quite frankly, is for when I really want to go to a conference. My employer only funds my travel if I am speaking at a conference, so sometimes, if I really want to go, I have to increase my odds as much as possible that a talk will be accepted. Multiple submissions usually help in this regard (although I can imagine it may hurt one's chances in some rare cases).

    Now, something happened with LFNW that's never happened to me before: the organizers accepted three of my four talk submissions, and wait-listed one of them! I wrote to them immediately telling them I was honored they wanted so many of my talks, and that I was of course happy to give all of them if they really wanted me to. Then, I happened to be working on my talks last weekend when the LFNW organizers were updating the schedule, and suddenly, I reloaded the page and saw they'd added the fourth talk as well!

    So, in the next two days, I'm giving four talks at LFNW! Most of them are talks I've given before (or at least, given substantially similar talks), so I am not worried about preparation (although I may have to skip any social events on Saturday night to practice the three-in-a-row for Sunday). What I'm worried about is that my voice has just recovered in the last few days from that long-lasting illness, and I am a bit afraid it won't hold out through all four. So, if you're at LFNW and notice I'm more quiet than usual in the hallway conversations (I'm not known for my silence, after all ;), it's because I'm saving my voice for my talks!

    Anyway, here's the rundown of my LFNW talks:

    If you're not able to attend LFNW, I'll try to live-dent as much as I can (when I'm not speaking, which will actually be almost half the conference ;). Watch my stream for the #lfnw tag. In particular, I'm really looking forward to Tom “spot” Callaway's talk. I really want to understand his reasoning for not signing the Chromium CLA, since, as Fontana suggests, it might illuminate why developers might oppose CLAs for permissively licensed projects.

    By way of previews of what conferences I'll be at soon (I'll try to blog more fully about them a week before they start), I'll be giving keynotes at both Samba XP and LinuxTag in a few weeks (both about GPL compliance). I'll also be speaking about GPL compliance at OSCON in late July, and I might be on a panel at the Desktop Summit. I hope to see many of you at one of these events.

    I should also apologize to the excellent folks who run RMLL (aka the Libre Software Meeting) in France each year. When I came back so ill from LCS and lost that whole week of work because of it, I took a hard look at my 2011 travel schedule and I just had to cut something. I'm sorry it had to be RMLL, but I hope to make it up to them in a future year. (I actually had to do something similar to the LFNW guys in 2010, which I'm about to make up for this weekend!)

    Posted on Friday 29 April 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2011-03-18: Questioning The Original Analysis On The Bionic Debate

    I was hoping to avoid having to comment further on this problematic story. I figured a brief comment was enough when it was just a story on The Register. But, it's now hit a major tech news outlet, and, given that I'm typically the first person everyone in the Free Software world asks when they suspect a GPL violation, I expect to get asked about this soon, so I might as well preempt the questions with a blog post and answer them with this URL.

    In short, the question is: Does Bionic (the Android/Linux default C library developed by Google) violate the GPL by importing “scrubbed” headers from Linux? For those of you seeking the TL;DR version: you can stop now if you expect me to answer this question; I'm not going to. I'm just going to show that the apparent original analysis that started this brouhaha is a speculative hypothesis that would require much more research to amount to anything of note.

    Indeed, the kind of work needed to answer these questions typically requires the painstaking work of a talented developer working very closely with legal counsel. I've done analysis like this before for other projects. The only one I can easily talk about publicly is the ath5k situation. (If you want to hear more on that, you can listen to an old oggcast where I discussed this with Karen Sandler or read papers that were written on the subject back where I used to work.)

    Anyway, most of what's been written about this subject of the Linux headers in Bionic has been poorly drafted speculation. I suppose some will say this blog post is no better, since I am not answering any questions, but my primary goal here is to draw attention to the fact that absolutely no one, as near as I can tell, has done the incredibly time-consuming work needed to reach anything approaching a definitive answer! Furthermore, the original article that launched this debate (Naughton's paper, The Bionic Library: Did Google Work Around the GPL?) is merely a position paper for a research project yet to be done.

    Naughton's full paper gives some examples that would make a good starting point for a complete analysis. It's disturbing, however, that his paper is presented as if it's a complete analysis. At best, his paper is a position statement of a hypothesis that then needs the actual experiment to figure things out. That rigorous research (as I keep reiterating) is still undone.

    To his credit, Naughton does admit that only the kind of analysis I'm talking about would yield a definitive answer. You have to get almost all the way through his paper to get to:

    Determining copyrightability is thus a fact-specific, case-by-case exercise. … Certainly, sorting out what is and isn’t subject to GPLv2 in Bionic would require at least a file-by-file, and most likely line-by-line, analysis of Bionic — a daunting task[.]
    Of course, in that statement, Naughton makes the mistake of subtly including an assumption in the hypothesis: he fails to acknowledge clearly that it's entirely possible the set of GPLv2-covered work found in Bionic could be the empty set; he hasn't shown it's not the empty set (even notwithstanding his very cursory analysis of a few files).

    Yet, even though Naughton admits full analysis (that he hasn't done) is necessary, he nevertheless later makes sweeping conclusions:

    The 750 Linux kernel header files … define a complex overarching structure, an application programming interface, that is thoughtfully and cleverly designed, and almost assuredly protected by copyright.
    Again, this is a hypothesis that would have to be tested and proved with evidence generated by the careful line-by-line analysis Naughton himself admits is necessary. Yet, he doesn't acknowledge that fact in his conclusions, leaving his readers (and IMO he's expecting to dupe lots of readers unsophisticated on these issues) with the impression he's shown something he hasn't. For example, one of my first questions would be whether or not Bionic uses only parts of Linux headers that are required by specification to write POSIX programs, a question that Naughton doesn't even consider.

    Finally, Naughton moves from the merely shoddy analysis to completely alarmist speculation with:

    But if Google is right, if it has succeeded in removing all copyrightable material from the Linux kernel headers, then it has unlocked the Linux kernel from the restrictions of GPLv2. Google can now use the “clean” Bionic headers to create a non-GPL’d fork of the Linux kernel, one that can be extended under proprietary license terms. Even if Google does not do this itself, it has enabled others to do so. It also has provided a useful roadmap for those who might want to do the same thing with other GPLv2-licensed [sic] programs, such as databases.

    If it turns out that Google has succeeded in making sure that the GPLv2 does not apply to Bionic, then Google's success is substantially more narrow. The success would be merely the extraction of the non-copyrightable facts that any C library needs to know about Linux to make a binary run when Linux happens to be the kernel underneath. Now, it should be duly noted that there already exist two libraries under the LGPL that have implemented that (namely, glibc and uClibc — the latter of which Naughton's cursory research apparently didn't even turn up). As it stands, anyone who wants to write user-space applications on a Linux-based system already can; there are multiple C library choices available under the weak copyleft license, LGPL. What Google, for its part, believes it has succeeded at is making a permissively licensed third alternative, an outcome that would be no surprise to those of us who have seen something like it done twice before.

    In short, everyone opining here seems to be conflating a lot of issues. There are many ways to interface with Linux. Many people, including me, believe quite strongly that there is no way to make a subprogram in kernel space (such as a device driver) without the terms of the GPLv2 applying to it. But writing a device driver is a specialized task that's very different from what most Linux users do. Most developers who “use Linux” — by which they typically mean write a user-space program that runs on a GNU/Linux operating system — have (at most) weak copyleft (LGPL) terms to follow, due to glibc or uClibc. I admit that I sometimes feel chagrin that proprietary applications can be written for GNU/Linux (and other Linux-based) systems, but that was a strategic decision that RMS made (correctly) at the start of the GNU project, and one that the Linux project, for its part, has also always sought.

    I'm quite sure no one — including hard-core copyleft advocates like me — expects or seeks the GPLv2 terms to apply to programs that interface with Linux solely as user-space programs running on an operating system that uses Linux as its kernel. Thus, I'd guess that even if it turned out that Google made some mistakes in this regard for Bionic, we'd all work together to rectify those mistakes so that the outcome everyone intended could occur.

    Moreover, to compare the specifics of this situation to other types of so-called “copyleft circumvention techniques” is just link-baiting that borders on trolling. Google wasn't seeking to circumvent the GPL at all; they were seeking to write and/or adapt a permissively licensed library that replaced an LGPL'd one. I'm of course against that task on principle (I think Google should have just used glibc and/or uClibc and required LGPL compliance by applications). But, to deny that it's possible to rewrite a C library for Linux under a license that isn't GPLv2 would immediately imply the (incorrect) conclusion that uClibc and glibc are covered by the GPLv2, and we are all quite sure they aren't; even Naughton himself admits that (regarding glibc).

    Google may have erred; no one actually knows for sure at this time. But the task they sought to do has been done before and everyone intended it to be permitted. The worst mistake of which we might ultimately accuse Google is inadvertently taking a copyright-infringing short-cut. If someone actually does all the research to prove that Google did so, I'd easily offer a 1,000-to-1 bet to anyone that such a copyright infringement could be cleared up easily, that Bionic would still work as a permissively licensed C library for Linux, and the implications of the whole thing wouldn't go beyond: “It's possible to write your own C library for Linux that isn't covered by the GPLv2” — a fact which we've all known for a decade and a half anyway.

    Update (2011-03-20): Many people, including Slashdot, have been linking to this comment by RMS on LKML about .h files. It's important to look carefully at what RMS is saying. Specifically, RMS says that sometimes #include'ing a .h file creates a copyright derivative work, and sometimes it doesn't; it depends on the details. Then, RMS goes on to discuss some rules of thumb that can help determine the outcome of the question. The details are what matter; and those are, as I explain in the main post above, what requires careful analysis done jointly and in close collaboration between a developer and a lawyer. There is no general rule of thumb that always immediately leads one to the right answer on this question.

    Posted on Friday 18 March 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-03-11: Thoughts On GPL Compliance of Red Hat's Linux Distribution

    Today, I was interviewed by Sam Varghese about whether Red Hat's current distribution policies for the kernel named Linux are GPL-compliant. You can read there that, AFAICT, they are, and that I have been presented with no evidence to the contrary.

    Last week, when the original story broke, I happened to be at the Linux Foundation's End User Summit, and I had a rather extensive discussion with attendees there about this issue, including Jon Corbet, who wrote an article about it. In my mind, the issue was settled after that discussion, and I had actually put it out of my mind, until I realized (when Varghese contacted me for an interview) that people had conflated my previous blog post from last weekend as being a comment specifically on the kernel distribution issue. (I'd been otherwise busy this week, and thus hadn't yet seen Jake Edge's follow-up article on LWN, to which I respond in detail below.)

    (BTW, on this issue please note that my analysis below is purely a GPLv2 analysis. GPLv3 analysis may be slightly different here, but since, for the moment, the issue relates to the kernel named Linux which is currently licensed GPLv2-only, discussing GPLv3 in this context is a bit off-topic.)

    Preferred Form For Modification

    I have been a bit amazed to watch that so much debate on this has happened around the words of preferred form of the work for making modifications to it from GPLv2§3. In particular, I can't help chuckling at the esoteric level to which many people believe they can read these words. I laugh to myself and think: not a one of these people commenting on this has ever tried in their life to actually enforce the GPL.

    To be a bit less sardonic, I agree with those who are saying that the preferred form of modification should be the exact organization of the bytes as we would all like to have them, to make our further work on the software as easy as possible. But I always look at the GPL with an enforcer's eye, and have to say this wish is one that won't be fulfilled all the time.

    The way preferred form for modification ends up working out in GPLv2 enforcement is something more like: you must provide complete sources that a sufficiently skilled software developer can actually make use of without any reverse engineering. Thus, it does clearly prohibit things like source on a cuneiform tablet that Branden mentions. (BTW, I wonder if Branden knows we GPL geeks started using that as an example circa 2001.) GPLv2 also certainly prohibits the source obfuscation tools that Jake Edge mentions. But, suppose you give me a nice .tar.bz2 file with all the sources organized neatly in mundane ASCII files, which I can open up with tar xvf, cd in, type make, and get a binary out of those sources that's functional and feature-equivalent to your binaries; then I can type make install and that binary is put into the right place on the device where your binary runs. I reboot the device, and I'm up and running with my newly compiled version rather than the binary you gave me. I'd call that scenario easily GPLv2 compliant.
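    The build-and-install test described above can be sketched as a short shell session. Everything here is hypothetical: the tarball, file names, and Makefile merely stand in for whatever a distributor actually ships, and a small shell script plays the role of the compiled program so the sketch stays self-contained.

    ```shell
    #!/bin/sh
    # Hypothetical walk-through of the compliance sanity check described above:
    # unpack the distributed sources, build with make, install, run the result.
    set -e
    workdir=$(mktemp -d)
    cd "$workdir"

    # Stand-in for the distributor's source release: a tarball containing
    # plain source files and a working Makefile.
    mkdir vendor-src
    cat > vendor-src/hello.sh <<'EOF'
    #!/bin/sh
    echo "hello from the rebuilt program"
    EOF
    printf 'all:\n\tcp hello.sh hello && chmod +x hello\n\ninstall: all\n\tmkdir -p $(DESTDIR)/bin && cp hello $(DESTDIR)/bin/\n' > vendor-src/Makefile
    tar czf vendor-src.tar.gz vendor-src

    # The test from the post: unpack, cd in, make, make install, run the binary.
    mkdir build && cd build
    tar xzf ../vendor-src.tar.gz
    cd vendor-src
    make
    make install DESTDIR="$workdir/root"
    "$workdir/root/bin/hello"
    ```

    A real vendor release would of course contain C sources and a real compile step, but the test is identical: shipped archive in, functionally equivalent installed binary out, with no reverse engineering in between.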

    Specifically, ease of upstream contribution has almost nothing to do with GPL compliance. Whether you get some software in a form the upstream likes (or can easily use) is more or less irrelevant to the letter of the license. The compliance question always is: did their distribution meet the terms required by the GPL?

    Now, I'm talking above about the letter of the license. The spirit of the license is something different. GPL exists (in part) to promote collaboration, and if you make it difficult for those receiving your distributions to easily share and improve the work with a larger community, it's still a fail (in a moral sense), but not a failure to comply with the GPL. It's a failure to treat the community well. Frankly, no software license can effectively prevent annoying and uncooperative behavior from those who seek to only follow the exact letter of the rules.

    Prominent Notices of Changes

    Meanwhile, what people are actually complaining about is that Red Hat's RHEL customers have access to better meta-information about why various patches were applied. Some have argued (quite reasonably) that this information is required under GPLv2§2(a), but usually that section has been interpreted to allow a very terse changelog. Corbet's original article mentioned that the Red Hat distribution of the kernel named Linux contains no changelog. I see why he said that, because it took me some time to find it myself (and an earlier version of this very blog post was therefore incorrect on that point), but the src.rpm file does have what appears to be a changelog embedded in the kernel.spec file. There's also a simple summary in the release notes found in a separate src.rpm (in the file called kernel.xml). This material seems sufficient to me to meet letter-of-the-license compliance with the GPLv2§2(a) requirements. I, too, wish the log were a bit more readable and organized, but, again, the debate isn't about whether there's optimal community cooperation going on, but rather whether this distribution complies with the GPL.

    Relating This to the RHEL Model

    My previous blog post, which, while it was focused on answering the question of whether or not Fedora is somehow inappropriately exploited (via, say, proprietary relicensing) to build the RHEL business model, also addressed the issue of whether RHEL's business model is GPL-compliant. I didn't think about that blog post in connection with the distribution of the kernel named Linux issue, but even considering that now, I still have no reason to believe RHEL's business model is non-compliant. (I continue to believe it's unfriendly, of course.)

    Varghese directly asked me if I felt the if you exercise GPL rights, then your money's no good here business model is an additional restriction under GPLv2. I don't think it is, and said so. Meanwhile, I was a bit troubled by the conclusions Jake Edge came to regarding this. First of all, I haven't forgotten about Sveasoft (geez, who could?), but that situation came up years after the RHEL business model started, so Jake's implication that Sveasoft “tried this model first” would be wrong even if Sveasoft had an identical business model.

    However, the bigger difficulty in trying to use the Sveasoft scenario as precedent (as Jake hints we should) is not only because of the “link rot” Jake referenced, but also because Sveasoft frequently modified their business model over a period of years. There's no way to coherently use them as an example for anything but erratic behavior.

    The RHEL model, by contrast, AFAICT, has been consistent for nearly a decade. (It was once called the “Red Hat Advanced Server”, but the business model seems to be the same). Notwithstanding Red Hat employees themselves, I've never talked to anyone who particularly likes the RHEL business model or thinks it is community-friendly, but I've also never received a report from someone that showed a GPL violation there. Even the “report” that first made me aware of the RHEL model, wherein someone told me: I hired a guy to call Red Hat for service all day every day for eight hours a day and those jerks at Red Hat said they were going to cancel my contract didn't sound like a GPL violation to me. I'd cancel the guy's contract, too, if his employee was calling me for eight hours a day straight!

    More importantly, though, I'm troubled that Jake indicates the RHEL model requires people to trade their GPL rights for service, because I don't think that's accurate. He goes further to say that terminat[ing] … support contract for users that run their own kernel … is another restriction on exercising GPL rights; that's very inaccurate. Refusing to support software that users have modified is completely different from restricting their right to modify. Given that the GPL was designed by a software developer (RMS), I find it particularly unlikely that he would have intended GPL to require distributors to provide support for any conceivable modification. What software developers want a license that puts that obligation hanging over their head?

    The likely confusion here is using the word “restriction” instead of “consequence”. It's undeniable that your support contractors may throw up their hands in disgust and quit if you modify the software in some strange way and still expect support. It might even be legitimately called a consequence of choosing to modify your software. But, you weren't restricted from making those modifications — far from it.

    As I've written about before, I think most work should always be paid by the hour anyway, which is for me somewhat a matter of personal principle. I therefore always remain skeptical of any software business model that isn't structured around the idea of a group of people getting paid for the hours that they actually worked. But, it's also clear to me that the GPL doesn't mandate that “hourly work contracts” are the only possible compliant business model; there are clearly others that are GPL compliant, too. Meanwhile, it's also trivial to invent a business model that isn't GPL compliant — I see such every day, on my ever-growing list of GPL violating companies who sell binary software with no source (nor offer therefor) included. I do find myself wishing that the people debating whether the exact right number of angels are dancing on the head of this particular GPL pin would instead spend some time helping to end the flagrant, constant, and obvious GPL violations with which I spend much time dealing each week.

    On that note, if you ever think that someone is violating the GPL, (either for an esoteric reason or a mundane one), I hope that you will attempt to get it resolved, and report the violation to a copyright holder or enforcement agent if you can't. The part of this debate I find particularly useful here is that people are considering carefully whether or not various activities are GPL compliant. To quote the signs all over New York City subways, If you see something, say something. Always report suspicious activity around GPL software so we find out together as a community if there's really a GPL violation going on, and correct it if there is.

    Posted on Friday 11 March 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-03-05: The Slur “Open Core”: Toward More Diligent Analysis

    I certainly deserve some of the blame, and for that I certainly apologize: the phrase “Open Core” has apparently become a slur word, used by those who wish to discredit the position of someone else without presenting facts. I've done my best when using the term to also give facts that backed up the claim, but even so, I finally abandoned the term back in November 2010, and I hope you will too.

    The story, from my point of view, began seventeen months ago, when I felt that “Open Core” was a definable term and that the behavior it named was a dangerous practice. I gave it the clear definition that I felt reflected the problematic behavior, as I wrote at the time:

    Like most buzzwords, Open Core has no real agreed-upon meaning. I'm using it to describe a business model whereby some middleware-ish system is released by a single, for-profit entity copyright holder, who requires copyright-assigned changes back to the company, and that company sells proprietary add-ons and applications that use the framework.

    Later — shortly after I pointed out Mark Shuttleworth's fascination with and leanings towards this practice — I realized that it was better to use the preexisting, tried-and-true term for the practice: “proprietary relicensing”. I've been pretty consistent in avoiding the term “Open Core” since then. I called on Shuttleworth to adopt the FSF's recommendations to show Canonical, Ltd. isn't seeking proprietary relicensing and left the whole thing at that. (Shuttleworth, of course, has refused to even respond, BTW.)

    Sadly, it was too late: I'd helped create a monster. A few weeks later, Alexandre Oliva (whose positions on the issue of proprietary software inside the kernel named Linux I definitely agree with) took it a step too far and called the kernel named Linux an “Open Core” project. Obviously, Linux developers don't and can't engage in proprietary relicensing; some just engage in a “look the other way” mentality with regard to proprietary components inside Linux. At the time, I said that the term “Open Core” was clearly just too confusing to use in analyzing a real-world licensing situation.

    So, I just stopped calling things “Open Core”. My concerns currently are regarding the practice of collecting copyright assignments to copyleft software and engaging in proprietary relicensing activity, and I've focused on advocating against that specific practice. That's what I've criticized Canonical, Ltd. for doing — both with their existing copyright assignment policies and with their effort to extend those policies community-wide with the manipulatively named “Project Harmony”.

    Shuttleworth, for his part, is now making use of the slur phrase I'd inadvertently helped create. Specifically, a few days ago, Shuttleworth accused Fedora of being an “Open Core” product.

    I've often said that Fedora is primarily a Red Hat corporate project (and it's among the reasons that I run Debian rather than Fedora). However, since “Open Core” clearly still has no agreed-upon meaning, when I read what Shuttleworth said, I considered the question of whether his claim had any merit (using the “Open Core” definition I used myself before I abandoned the term). Put simply, I asked myself the question: Does Red Hat engage in “proprietary relicensing of copyleft software with mandatory copyright assignment or non-copyleft CLA” with Fedora?

    Fact is, despite having serious reservations about how the RHEL business model works, I have no evidence to show that Red Hat requires copyright assignment or a mandatory non-copyleft CLA on copyleft projects on any products other than Cygwin. So, if Shuttleworth had said: Cygwin is Red Hat's Open Core product, I would still encourage him that we should all now drop the term “Open Core”, but I would also agree with him that Cygwin is a proprietary-relicensed product and that we should urge Red Hat to abandon that practice. (Update: Fontana has also noted (in a statement subsequently deleted by its author) that some JBoss projects require permissive CLAs but license back out under LGPL, so that would be another example.)

    But does Fedora require contributors to assign copyright or do non-copyleft licensing? I can't find the evidence, but there are some confusing facts. Fedora has a Contributor Licensing Agreement (CLA), which, in §1(D), clearly allows contributors to choose their own license. If the contributor accepts all the defaults on the existing Fedora CLA, the contributor gives a permissive license to the contribution (even for copyleft projects). Fortunately, though, the author can easily copyleft a work under the agreement, and it is still accepted by Fedora. (Contrast this with Canonical, Ltd.'s mandatory copyright assignment form, which explicitly demands for Canonical, Ltd. the power to engage in proprietary relicensing.)

    While Fedora's current CLA does push people toward permissive licensing of copylefted works, the new draft of the Fedora CLA is much clearer on this point (in §2). In other words, the proposed replacement closes this bug. It thus seems to me Red Hat is looking to make things better, while Canonical, Ltd. hoodwinks us and manufactures consent in Project “Harmony” around a proprietary copyright-grab by for-profit corporations. When I line up the two trajectories, Red Hat is slowly getting better, and Canonical, Ltd. is quickly getting worse. Thus, Shuttleworth, sitting in his black pot, clearly has no right to say that the slightly brown kettle sitting next to him is black, too.

    It could be that Shuttleworth is actually thinking of the RHEL business model itself, which is actually quite different from proprietary relicensing. I do have strong, negative opinions about the RHEL business model; I have long called it the “if you like copyleft, your money is no good here” business model. It's a GPL-compliant business model merely because the GPL is silent on whether or not you must keep someone as your customer. Red Hat tells RHEL customers that if they choose to exercise their rights under GPL, then their support contract will be canceled. I've often pointed out (although this may be the first time publicly on the Internet) that Red Hat found a bright line of GPL compliance, walked right up to it, and were the first to stake out a business model right on the line. (I've been told, though, that Cygnus experimented with this business model before being acquired by Red Hat.) This practice is, frankly, barely legitimate.

    Ironically, RMS and I used to say that Canonical, Ltd.'s new business model of interest — proprietary relicensing (once trailblazed by MySQL AB) — was also barely legitimate. In one literal sense, that's still true: it's legitimate in the sense that it doesn't violate GPL. In the sense of software freedom morality, I think proprietary relicensing harms the Free Software community too much, and that it was therefore a mistake to ever tolerate it.

    As for RHEL's business model, I've never liked it, but I'm still unsure (even ten years after its inception) about its software freedom morality. It doesn't seem as harmful as proprietary relicensing. In proprietary relicensing, those mistreated under the model are the small businesses and individual developers who are pressured to give up their copyleft rights lest their patches be rejected or rewritten. The small entities are left to choose between maintaining a fork or giving over proprietary corporate control of the codebase. In RHEL's business model, by contrast, the mistreated entities are large corporations that are forced to choose between exercising their GPL rights and losing access to the expensive RHEL support. It seems to me that the RHEL model is not immoral, but I definitely find it unfriendly and inappropriate, since it says: if you exercise software freedom, you can't be our customer.

    However, when we analyze these models that occupy the zone between license legitimacy and software freedom morality, I think I've learned from the mistake of using slur phrases like “Open Core”. From my point of view, most of these “edge” business models have ill effects on software freedom and community building, and we have to examine their nuances mindfully and gauge carefully the level of harm caused. Sometimes, over time, that harm shows itself to be unbearable (as with proprietary relicensing). We must stand against such models and meanwhile continue to question the rest with precise analysis.

    Posted on Saturday 05 March 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-03-01: Software Freedom Is Elementary, My Dear Watson.

    I've watched the game show, Jeopardy!, regularly since its Trebek-hosted relaunch on 1984-09-10. I even remember distinctly the Final Jeopardy question that night as This date is the first day of the new millennium. At the age of 11, I got the answer wrong, falling for the incorrect What is 2000-01-01?, but I recalled this memory eleven years ago during the debates regarding when the millennium turnover happened.

    I had periods of life where I watched Jeopardy! only rarely, but in recent years (as I've become more of a student of games (in part, because of poker)), I've watched Jeopardy! almost nightly over dinner with my wife. I've learned that I'm unlikely to excel as a Jeopardy! player myself because (a) I read slowly and (b) my recall of facts, while reasonably strong, is not instantaneous. I thus haven't tried out for the show, but I'm nevertheless a fan of strong players.

    Jeopardy! isn't my only spectator game. Right after college, even though I'm a worse-than-mediocre chess player, I watched with excitement as Deep Blue played and defeated Kasparov. Kasparov has disputed the results and how much humans were actually involved, but even so, such interference was minimal (between matches) and the demonstration still showed computer algorithmic mastery of chess.

    Of course, the core algorithms that Deep Blue used were well known and often implemented. I learned α-β pruning in my undergraduate AI course, and it was clear that a sufficiently fast computer, given a few strong heuristics, could beat most any human at a full-information game with a reasonable branching factor. And, computers typically do these days.

    I suppose I never really thought about the issues of Deep Blue being released as Free Software. First, because I was not as involved with Free Software then as I am now, and also, as near as anyone could tell, Deep Blue's software was probably not useful for anything other than playing chess, and its primary power was in its ability to go very deep (hence the name, I guess) in the search tree. In short, Deep Blue was primarily a hardware, not a software, success story.

    It was, nevertheless, impressive, and last month I saw the next installment in this IBM story. I watched with interest as IBM's Watson defeated two champion Jeopardy! players. Ken Jennings, for one, even welcomed our new computer overlords.

    Watson beating Jeopardy! is, frankly, a lot more innovative than Deep Blue beating chess. Most don't know this about me, but I came very close to focusing my career on PhD work in Natural Language Processing; I believe fundamentally it's the area of AI most in need of attention and research. Watson is a shining example of success in modern NLP, and I actually believe some of the IBM hype about how Watson's technology can be applied elsewhere, such as medical information systems. Indeed, IBM has announced a deal with Columbia University Medical Center to adapt the system for medical diagnostics. (Perhaps Watson's next TV appearance will be on House.)

    This all sounds great to most people, but to me, the real concern is the freedom of the software. We've shown in the software freedom community that to advance software and improve it, sharing the software is essential. Technology locked up in a vaulted cave doesn't allow all the great minds to collaborate. Just as we don't lock up libraries so that only the gilded overlords have access, neither should the best software technology be locked away as proprietary.

    Indeed, Eric Brown, at his Linux Foundation End User Linux Summit talk, told us that Watson relied heavily on the publicly available software freedom codebase, such as GNU/Linux, Hadoop, and other FLOSS components. They clearly couldn't do their work without building upon the work we shared with IBM, yet IBM apparently ignores its moral obligation to reciprocate.

    So, I just point-blank asked Brown why Watson is proprietary. Of course, I long ago learned never to ask a confrontational question from the crowd at a technical talk without knowing what the answer is likely to be. Brown answered in the way I expected: We're working with Universities to provide a framework for their research. I followed up, asking when he would actually release the sources and what the license would be. He dodged the question, and instead speculated about what licenses IBM sometimes likes to use when it does choose to release code; he did not indicate if Watson's sources will ever be released. In short, the answer from IBM is clear: Watson's general ideas will be shared with academics, but the source code won't be.

    This point is precisely one of the reasons I didn't pursue a career in academic Computer Science. Since most jobs — including professorships at Universities — for PhDs in Computer Science require that any code written be kept proprietary, most Computer Science researchers have convinced themselves that code doesn't matter; only publishing ideas does. This belief is so pervasive that I knew something like this would be Brown's response to my query. (I was even so sure that I wrote almost this entire blog post before I asked the question.)

    I'd easily agree that publishing papers is better than the technology being only a trade secret. At least we can learn a little bit about the work. But in all but the pure theoretical areas of Computer Science, code is written to exemplify, test, and exercise the ideas. Merely publishing papers and not the code is akin to a chemist publishing final results but nothing about the methodologies or raw data. Science, in such cases, is unverifiable and unreproducible. If we accepted such in fields other than CS, we'd have accepted the idea that cold fusion was discovered in 1989.

    I don't think I'm going to convince IBM to release Watson's sources as Free Software. What I do hope is that perhaps this blog post convinces a few more people that we just shouldn't accept that Computer Science is advanced by researchers who give us flashy demos and code-less research papers. I, for one, welcome our computer overlords…but only if I can study and modify their source code.

    Posted on Tuesday 01 March 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2011-02-15: Everyone in USA: Comment against ACTA today!

    In the USA, the deadline for comments on ACTA is today (Tuesday 15 February 2011) at 17:00 US/Eastern. It's absolutely imperative that every USA citizen submit a comment on this. The Free Software Foundation has details on how to do so.

    ACTA is a dangerous international agreement that would establish additional criminal penalties, promulgate DMCA/EUCD-like legislation around the world, and otherwise extend copyright law into places it should not go. Copyright law is already much stronger than anyone needs.

    On a meta-point, it's extremely important that USA citizens participate in comment processes like this. The reason that things like ACTA can happen in the USA is because most of the citizens don't pay attention. By way of hyperbolic fantasy, imagine if every citizen of the USA wrote a letter today to Mr. McCoy about ACTA. It'd be a news story on all the major news networks tonight, and would probably be in the headlines in print/online news stories tomorrow. Our whole country would suddenly be debating whether or not we should have criminal penalties for copying TV shows, and whether breaking a DVD's DRM should be illegal.

    Obviously, that fantasy won't happen, but getting from where we are to that wonderful fantasy is actually linear; each person who writes to Mr. McCoy today makes a difference! Please take 15 minutes out of your day today and do so. It's the least you can do on this issue.

    The Free Software Foundation has a sample letter you can use if you don't have time to write your own. I wrote my own, giving some of my unique perspective, which I include below.

    The automated system assigned the comment below the tracking number 80bef9a1 (cool, it's in hex! :)

    Stanford K. McCoy
    Assistant U.S. Trade Representative for Intellectual Property and Innovation
    Office of the United States Trade Representative
    600 17th St NW
    Washington, DC 20006

    Re: ACTA Public Comments (Docket no. USTR-2010-0014)

    Dear Mr. McCoy:

    I am a USA citizen writing to urge that the USA not sign ACTA. Copyright law already reaches too far. ACTA would extend problematic, overly-broad copyright rules around the world and would increase the already inappropriate criminal penalties for copyright infringement here in the USA.

    Both individually and as an agent of my employer, I am regularly involved in copyright enforcement efforts to defend the Free Software license called the GNU General Public License (GPL). I therefore think my perspective can be uniquely contrasted with other copyright holders who support ACTA.

    Specifically, when engaging in copyright enforcement for the GPL, we treat it as purely a civil issue, not a criminal one. We have been successful in defending the rights of software authors in this regard without the need for criminal penalties for the rampant copyright infringement that we often encounter.

    I realize that many powerful corporate copyright holders wish to see criminal penalties for copyright infringement expanded. As someone who has worked in the area of copyright enforcement regularly for 12 years, I see absolutely no reason that any copyright infringement of any kind ever should be considered a criminal matter. Copyright holders who believe their rights have been infringed have the full power of civil law to defend their rights. Using the power of government to impose criminal penalties for copyright infringement is an inappropriate use of government to interfere in civil disputes between its citizens.

    Finally, ACTA would introduce new barriers for those of us trying to change our copyright law here in the USA. The USA should neither impose its desired copyright regime on other countries, nor should the USA bind itself in international agreements on an issue where its citizens are in great disagreement about correct policy.

    Thank you for considering my opinion, and please do not allow the USA to sign ACTA.

    Bradley M. Kuhn

    Posted on Tuesday 15 February 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2011-01-23: A Brief Tutorial on a Shared Git Repository

    A while ago, I set up Git for a group privately sharing the same central repository. Specifically, this is a tutorial for those who want a Git setup that is a little bit like an SVN repository: a central repository where all the branches that matter are published in one place. I found this file today floating in a directory of “things I should publish at some point”, so I decided just to put it up: every time I came across this file, it reminded me that it's really morally wrong (IMO) to keep generally useful technical information private, even when it's only laziness that's causing it.

    Before you read this, note that most developers don't use Git this way, particularly with the advent of shared hosting facilities like Gitorious, since systems like Gitorious solve the sorts of problems that this tutorial addresses. When I originally wrote this (more than a year ago), the only well-known project that I found using a system like this was Samba; I haven't seen a lot of other projects that do this. Indeed, this process is not really what Git is designed to do, but sometimes groups that are used to SVN expect there to be a “canonical repository” that has all the contents of the shared work under one proverbial roof, and set up a “one true Git repository” for the project from which everyone clones.

    Thus, this tutorial is primarily targeted at a user mostly familiar with an SVN workflow who has ssh access to a server hosting a Git repository (usually writable by multiple people) living in the directory /git/REPOSITORY.git/.

    Ultimately, the stuff that I've documented herein is basically to fill in the gaps that I found when reading the following tutorials:

    So, here's my tutorial, FWIW. (I apologize that I make the mortal sin of tutorial writing: I drift wildly between second-person-singular, first-person-plural, and passive-voice third-person. If someone sends me a patch to the HTML file that fixes this, I'll fix it. :)

    Initial Setup

    Before you start using git, you should run these commands to let it know who you are so your info appears correctly in commit logs:

                     $ git config --global user.email you@example.com
                     $ git config --global user.name “Your Real Name”

    Examining Your First Clone

    To get started, first we clone the repository:

                      $ git clone ssh://

    Now, note that Git almost always operates in the terms of branches. Unlike Subversion, Git's branches are first-class citizens and most operations in Git operate around a branch. The default branch is often called “master”, although I tend to avoid using the master branch for much, mainly because everyone who uses git has a different perception of what the master branch should embody. Therefore, giving all your branches more descriptive names is helpful. But, when you first import something into git, (for example, from existing Subversion trees), everything from Subversion's trunk is thrown on the master branch.

    So, we take a look at the result of that clone command. We have a new directory, called REPOSITORY, that contains a “working checkout” of the repository, and under that there is one special directory, REPOSITORY/.git/, which is a full copy of the repository. Note that this is not like Subversion, where what you have on your local machine is merely one view of the repository. With Git, you have a full copy of everything. However, an interesting thing has been done on your copy with the branches. You can take a look with these commands:

                      $ git branch
                      * master
                      $ git branch -r

    The first list of branches are the branches that are personal and local to you. (By default, git branch uses the -l option, which shows you only “local” branches; -r means “remote” branches. You can also use -a to see all of them.) Unless you take action to publish your local branches in some way, they will be your private area to work in and live only on your computer. (And be aware: they are not backed up unless you back them up!) The remote ones, which all start with “origin/”, track the progress on the shared repository.

    (Note the term “origin” is a standard way of referring to “the repository from whence you cloned”, and origin/BRANCH refers to “BRANCH as it looks in the repository from whence you cloned”. However, there is nothing magical about the name “origin”. It's set up to DTRT in your WORKING-DIRECTORY/.git/config file, and the clone command set it all up for you, which is why you have them now.)
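    For reference, the stanza that git clone writes into WORKING-DIRECTORY/.git/config looks roughly like the following (HOSTNAME and REPOSITORY are placeholders for your own server and repository; the exact contents vary by Git version):

                      [remote "origin"]
                              url = ssh://HOSTNAME/git/REPOSITORY.git
                              fetch = +refs/heads/*:refs/remotes/origin/*
                      [branch "master"]
                              remote = origin
                              merge = refs/heads/master

    The fetch line is what maps branches on the server into your local origin/BRANCH names, and the branch stanza is what ties your local master to the origin's master.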

    Get to Work

    The canonical way to “get moving” with a new task in Git is to somehow create a branch for it. Branches are designed to be cheap and quick to create so that users will not be shy about creating a new one. Naming conventions are your own, but generally I like to call a branch USERNAME/TASK when I'm still not sure exactly what I'll be doing with it (i.e., who I will publish it to, etc.) You can always merge it back into another branch, or copy it to another branch (perhaps using a more formal name) later.

    Where do you Start Your Branch From?

    Once a repository exists, each branch in the repository comes from somewhere — it has a parent. These relationships help Git know how to easily merge branches together. So, the most typical procedure of starting a new branch of your own is to begin with an existing branch. The git checkout command is the easiest to use to start this:

                       git checkout -b USERNAME/feature origin/master

    In this example, we've created our own local branch, called USERNAME/feature, and it's started from the current state of origin/master. When you are getting started, you will usually want to base your new branches off ones that exist on the origin. This isn't a rule; it's just less confusing for a newbie if all your branches have a parent revision that lives on the server.

    Now, it's important to note here that no branch stands still. It's best to think about a branch as a “moving pointer” to a linked list of some set of revisions in the repository.

    Every revision stored in git, local or remote, has a SHA1 identifier, which is computed based on the revisions before it plus the new patch that the revision applies.

    Meanwhile, the only two substantive differences between one of these SHA1 identifiers and an actual branch is that (a) Git keeps changing what identifier the branch refers to as new commits come in (aka it moves the branch's HEAD), and (b) Git keeps track of the history of identifiers the branch previously referred to.
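    You can watch both behaviors with a throwaway repository. (This is a self-contained sketch: it builds a scratch repository in a temporary directory; the file name and commit messages are invented.)

    ```shell
    set -e
    # Build a scratch repository with one commit.
    dir=$(mktemp -d); cd "$dir"
    git init -q
    git config user.email you@example.com
    git config user.name "Your Real Name"
    echo one > file.txt
    git add file.txt
    git commit -q -m "first commit"

    branch=$(git symbolic-ref --short HEAD)   # the default branch's name
    first=$(git rev-parse "$branch")          # the SHA1 it points at now

    # A new commit moves the branch's HEAD to a new SHA1...
    echo two >> file.txt
    git commit -q -am "second commit"
    second=$(git rev-parse "$branch")

    # ...and the reflog records the identifiers the branch previously
    # referred to.
    git reflog "$branch"
    ```

    Note that git rev-parse resolves a branch name to the SHA1 it currently points at, and git reflog shows the trail of SHA1s it pointed at before.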

    So, above, when we asked git checkout to create a new branch called USERNAME/feature based on origin/master, the two important things to realize are that (a) your new branch has its HEAD pointing at the same revision that is currently the HEAD of origin/master, and (b) you got a new list to start adding revisions to in the new branch.

    We didn't have to use a branch for that. We could have simply started our branch from the SHA1 of any old revision. We happened to want to declare a relationship with the master branch on the server in this case, but we could have easily picked any SHA1 from our git log and used that one.
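    For example, starting a branch from an arbitrary SHA1 looks like this. (A self-contained sketch using a scratch repository; the branch name alice/experiment, file name, and commit messages are invented.)

    ```shell
    set -e
    # Scratch repository with two commits.
    dir=$(mktemp -d); cd "$dir"
    git init -q
    git config user.email you@example.com
    git config user.name "Your Real Name"
    echo one > file.txt; git add file.txt; git commit -q -m "first commit"
    echo two >> file.txt; git commit -q -am "second commit"

    # Pick the SHA1 of the older revision out of the log...
    old_sha=$(git log --format=%H | tail -n 1)

    # ...and start a new branch right there, rather than at a branch head.
    git checkout -q -b alice/experiment "$old_sha"
    git log --oneline    # now shows only the first commit
    ```

    The new branch's HEAD is the chosen revision, and new commits made here grow from that point rather than from the tip of any other branch.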

    Do Not Fear the checkout

    Every time you run a git checkout SOMETHING command, your entire working directory changes. This normally scares Subversion users; it certainly scared me the first time I used git checkout SOMETHING. But, the only reason it is scary is because svn switch, which is the roughly analogous command in the Subversion world, so often doesn't do something sane with your working copy. By contrast, switching branches and changing your whole working directory is a common occurrence with git.

    Note, however, that you cannot do git checkout with uncommitted changes in your directory (which, BTW, also makes it safer than svn switch). However, don't be too Subversion-user-like and therefore afraid to commit things. Remember, with Git (and unlike with Subversion), committing and publishing are two different operations. You can commit to your heart's content on local branches and merge or push into public branches later. (There are even commands to squash many commits into one before putting it on a public branch, in case you don't want people to see all the intermediate goofiness you might have done. This is why, BTW, many Git users commit as often as an SVN user would save in their editors.)
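    One non-interactive way to do that squashing is git merge --squash. (A self-contained sketch using a scratch repository; the branch name alice/feature and all file/commit names are invented.)

    ```shell
    set -e
    # Scratch repository with a base commit on the default branch.
    dir=$(mktemp -d); cd "$dir"
    git init -q
    git config user.email you@example.com
    git config user.name "Your Real Name"
    echo base > file.txt; git add file.txt; git commit -q -m "base"
    main=$(git symbolic-ref --short HEAD)

    # Commit early and often on a private branch...
    git checkout -q -b alice/feature
    echo step1 >> file.txt; git commit -q -am "wip: first try"
    echo step2 >> file.txt; git commit -q -am "wip: fix the first try"

    # ...then collapse the whole branch into one staged change and
    # record it as a single commit on the public branch.
    git checkout -q "$main"
    git merge --squash alice/feature >/dev/null
    git commit -q -m "Add the feature (squashed)"
    git log --oneline    # two commits total: base plus the squashed one
    ```

    The intermediate “wip” commits stay on the private branch; only the single squashed commit appears on the branch you publish.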

    However, if you must switch checkouts but really do fear making commits, there is a tool for you: look into git stash.
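    A minimal sketch of that stash workflow (self-contained, using a scratch repository; branch and file names are invented):

    ```shell
    set -e
    # Scratch repository with a base commit and a feature branch.
    dir=$(mktemp -d); cd "$dir"
    git init -q
    git config user.email you@example.com
    git config user.name "Your Real Name"
    echo base > file.txt; git add file.txt; git commit -q -m "base"
    main=$(git symbolic-ref --short HEAD)
    git checkout -q -b alice/feature

    # An uncommitted change in the working directory...
    echo work-in-progress >> file.txt

    # ...gets parked on the stash, freeing you to switch branches.
    git stash push -q
    git checkout -q "$main"

    # Later, come back and restore the uncommitted change.
    git checkout -q alice/feature
    git stash pop >/dev/null
    grep work-in-progress file.txt
    ```

    After git stash pop, the change is back in your working directory, still uncommitted, exactly as you left it.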

    Share with the Group

    Once you've been doing some work, you'll end up with some useful work finished on a USERNAME/feature branch. As noted before, this is your own private branch. You probably want to use the shared repository to make your work available to others.

    When using a shared Git repository, there are two ways to share your branches with your colleagues. The first procedure is when you simply want to publish directly on an existing branch. The second is when you wish to create your own branch.

    Publishing to Existing Branch

    You may choose to merge your work directly into a known branch on the remote repository. That's a viable option, certainly, but often you want to make it available on a separate branch for others to examine, even before you merge it into something like the master branch. We discuss the slightly more complicated new branch publication next, but for the moment, we can consider the quicker process of publishing to an existing branch.

    Let's consider when we have work on USERNAME/feature and we would like to make it available on the master branch. Make sure your USERNAME/feature branch is clean (i.e., all your changes are committed).

    The first thing you should verify is that you have what I call a “local tracking branch” (this is my own term that I made up, I think, you won't likely see it in other documentation) that is tied directly with the same name to the origin. This is not completely necessary, but is much more convenient to keep track of what you are doing. To check, do a:

                       $ git branch -a
                       * USERNAME/feature

    In the list, you should see both master and origin/master. If you don't have that, you should create it with:

                       $ git checkout -b master origin/master

    So, either way, you want to be on the master branch. To get there if it already existed, you can run:

                       $ git checkout master

    And you should be able to verify that you are now on master with:

                       $ git branch
                       * master

    Now, we're ready to merge in our changes:

                       $ git merge USERNAME/feature
                       Updating ded2fb3..9b1c0c9
                       Fast forward
                       FILE ...
                       N files changed, X insertions(+), Y deletions(-)

    If you don't get any message about conflicts, everything is fine. Your changes from USERNAME/feature are now on master. Next, we publish it to the shared repository:

                      $ git push
                      Counting objects: N, done.
                      Compressing objects: 100% (A/A), done.
                      Writing objects: 100% (A/A), XXX bytes, done.
                      Total G (delta T), reused 0 (delta 0)
                      refs/heads/master: IDENTIFIER_X -> IDENTIFIER_Y
                      To ssh://
                       X..Y  master -> master

    Your changes can now be seen by others when they git pull (See below for details).

    Publishing to a New Branch

    Suppose that, instead of immediately putting the feature on the master branch, you wanted to simply mirror your personal feature branch to the rest of your colleagues so they can try it out before it officially becomes part of master. To do that, you first need to tell Git to make a new branch on the shared repository. In this case, you also use the git push command. (It is a catch-all command for any operation you want to perform on the remote repository without actually logging into the server where the shared Git repository is hosted. Thus, not surprisingly, nearly any git push command you can think of will require you to be net-connected.)

    So, first let's create a local branch that has the actual name we want to use publicly. To do this, we'll just use the checkout command, because it's the most convenient and quick way to create a local branch from an already existing local branch:

                      $ git branch -l
                      * USERNAME/feature
                      $ git checkout -b proposed-feature USERNAME/feature
                      Switched to a new branch “proposed-feature”
                      $ git branch -l
                      * proposed-feature

    Now, again, we've only created this branch locally. We need an equivalent branch on the server, too. This is where git push comes in:

                      $ git push origin proposed-feature:refs/heads/proposed-feature

    Let's break that command down. The first argument for push is always “the place you are pushing to”. That can be any sort of git URL, including ssh://, http://, or git://. However, remember that the original clone operation set up this shorthand “origin” to refer to the place from whence we cloned. We'll use that shorthand here so we don't have to type out that big long URL.

    The second argument is a colon-separated item. The left hand side is the local branch we're pushing from on our local repository, and the right hand side is the branch we are pushing to on the remote repository.

    (BTW, I have no idea why refs/heads/ is necessary. It seems you should be able to say proposed-feature:proposed-feature and git would figure out what you mean. But, in the setups I've worked with, it doesn't usually work if you don't put in refs/heads/.)
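For what it's worth, here's a sketch, in a throwaway pair of repositories with made-up names, showing that recent Git clients do accept the shorthand form without refs/heads/ when the branch names match (your mileage may vary with older setups):

```shell
set -e
# Scratch setup: a bare repository standing in for the shared ssh:// server.
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare shared.git
git clone -q shared.git work; cd work
git config user.name demo; git config user.email demo@example.org
git commit -q --allow-empty -m 'initial commit'
git checkout -q -b proposed-feature
# The fully spelled-out refspec form from the text:
git push -q origin proposed-feature:refs/heads/proposed-feature
# The shorthand; with a recent client this resolves to the same refspec:
git push -q origin proposed-feature
git ls-remote --heads origin   # now lists refs/heads/proposed-feature
```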

    That operation will take a bit to run, but when it is done we see something like:

                      Counting objects: 35, done.
                      Compressing objects: 100% (31/31), done.
                      Writing objects: 100% (33/33), 9.44 MiB | 262 KiB/s, done.
                      Total 33 (delta 1), reused 27 (delta 0)
                      refs/heads/proposed-feature: 0000000000000000000000000000000000000000
                                                     -> CURRENT_HEAD_SHA1_SUM
                      To ssh://
                       * [new branch]      proposed-feature -> proposed-feature

    In older Git clients, you may not see that last line, and you won't get the origin/proposed-feature branch until you do a subsequent pull. I believe newer git clients do the pull automatically for you.

    Reconfiguring Your Client to see the New Remote Branch

    Annoyingly, as the creator of the branch, we have some extra configuration work to do to officially tell our local repository that these two branches should be linked. Git didn't know from our single git push command that our repository's relationship with that remote branch was going to be a long-term thing. To marry our local proposed-feature branch to origin/proposed-feature, we use the commands:

                      $ git config branch.proposed-feature.remote origin
                      $ git config branch.proposed-feature.merge refs/heads/proposed-feature

    We can see that this branch now exists because we find:

                      $ git branch -a
                      * proposed-feature
                        remotes/origin/proposed-feature

    After this is done, the remote repository has a proposed-feature branch and, locally, we have a proposed-feature branch that is a “local tracking branch” of origin/proposed-feature. Note that our USERNAME/feature, where all this stuff started from, is still around too, but can be deleted with:

                    $ git branch -d USERNAME/feature
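Incidentally, if your Git client is new enough (1.7.0 or later, I believe), git push -u collapses the push-and-then-configure dance into a single step. A sketch, again with made-up names in a scratch setup:

```shell
set -e
# Scratch setup: a bare repository as the stand-in shared server.
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare shared.git
git clone -q shared.git work; cd work
git config user.name demo; git config user.email demo@example.org
git commit -q --allow-empty -m 'initial commit'
git checkout -q -b proposed-feature
git push -q -u origin proposed-feature   # push AND record the tracking relationship
# The two branch.* settings now exist without running git config by hand:
git config branch.proposed-feature.remote   # prints: origin
git config branch.proposed-feature.merge    # prints: refs/heads/proposed-feature
```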

    Finding It Elsewhere

    Meanwhile, someone else who has separately cloned the repository before we did this won't see these changes automatically, but a simple git pull command can get it:

                      $ git pull
                      remote: Generating pack...
                      remote: Done counting 35 objects.
                      remote: Result has 33 objects.
                      remote: Deltifying 33 objects...
                      remote:  100% (33/33) done
                      remote: Total 33 (delta 1), reused 27 (delta 0)
                      Unpacking objects: 100% (33/33), done.
                      From ssh://
                       * [new branch]      proposed-feature -> origin/proposed-feature
                      Already up-to-date.
                      $ git branch -a
                      * master

    However, their checkout directory won't show the changes until they create a local “mirror” branch of their own. Usually, this would be done with:

                      $ git checkout -b proposed-feature origin/proposed-feature

    Then they'll have a working copy with all the data and a local branch to work on.

    BTW, if you want to try this yourself just to see how it works, you can always make another clone in some other directory just to play with, by doing something like:

                      $ git clone ssh:// \

    Now on this secondary checkout (which makes you just like the user who is not the creator of the new branch), work can be pushed and pulled on that branch easily. Namely, anything you merge into or commit on your local proposed-feature branch will automatically be pushed to origin/proposed-feature on the server when you git push. And, anything that shows up from other users on the origin/proposed-feature branch will show up when you do a git pull. These two branches were paired together from the start.

    Irrational Rebased Fears

    When using a shared repository like this, git rebase usually screws something up. When Git is used in the “normal way”, rebase is one of the amazing things about Git. The rebase idea is: you unwind the entire work you've done on one of your local branches, bring in the changes that other people have made in the meantime, and then reapply your changes on top of them.

    It works out great when you use Git the way the Linux kernel project does. However, if you use a single, shared repository in a work group, rebase can be dangerous.
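To make the idea concrete, here is rebase at work in a single throwaway repository (all names made up); this is the benign, purely local use of it:

```shell
set -e
# Scratch repository with two diverging lines of work (names hypothetical).
tmp=$(mktemp -d); cd "$tmp"
git init -q work; cd work
git config user.name demo; git config user.email demo@example.org
echo a > base.txt; git add base.txt; git commit -q -m 'base'
start=$(git symbolic-ref --short HEAD)   # master or main, depending on Git version
git checkout -q -b feature
echo mine > mine.txt; git add mine.txt; git commit -q -m 'my change'
git checkout -q "$start"
echo theirs > theirs.txt; git add theirs.txt; git commit -q -m 'upstream change'
git checkout -q feature
git rebase -q "$start"   # unwind 'my change', then replay it on top of 'upstream change'
git log --oneline        # 'my change' now sits on top of 'upstream change'
```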

    Generally speaking, though, with a shared repository, you can use git merge and won't need rebasing. My usual work flow is that I get started on a feature with:

                      $ git checkout -b bkuhn/new-feature starting-branch

    I work work work away on it. Then, when it's ready, I send a patch around to a mailing list that I generate with:

                      $ git diff $(git merge-base starting-branch bkuhn/new-feature) bkuhn/new-feature

    Note that the command in the $() returns a single identifier for a version: namely, the fork point between starting-branch and bkuhn/new-feature. The diff output is therefore just the stuff I've actually changed: all the differences between the place where I forked and my current work.
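As an aside, I believe Git's three-dot diff syntax is shorthand for exactly this merge-base construction; a quick check in a scratch repository (names made up) seems to confirm it:

```shell
set -e
# Scratch repository with a fork point and work on both sides of it.
tmp=$(mktemp -d); cd "$tmp"
git init -q work; cd work
git config user.name demo; git config user.email demo@example.org
echo one > f.txt; git add f.txt; git commit -q -m 'base'
start=$(git symbolic-ref --short HEAD)   # this plays the role of starting-branch
git checkout -q -b bkuhn/new-feature
echo two >> f.txt; git commit -q -am 'feature work'
git checkout -q "$start"
echo upstream > g.txt; git add g.txt; git commit -q -m 'unrelated upstream work'
# The explicit merge-base form from the text:
git diff "$(git merge-base "$start" bkuhn/new-feature)" bkuhn/new-feature > long.diff
# The three-dot shorthand, diffing from the fork point to the branch tip:
git diff "$start"...bkuhn/new-feature > short.diff
cmp long.diff short.diff && echo 'identical'
```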

    Once I have discussed and decided with my co-developers that we like what I've done, I do this:

                      $ git checkout starting-branch
                      $ git merge bkuhn/new-feature

    If all went well, this should automatically commit my feature into starting-branch. Usually, there is also an origin/starting-branch, which I've probably set up for automatic push/pull with my local starting-branch, so I then can make the change officially by running:

                      $ git push

    The fact that I avoid rebase is probably merely FUD, and if I learned more, I could likely use it safely even with a shared repository. But I have no advice on how to make it work. In particular, this Git FAQ entry shows quite clearly that my work sequence ceases to work all that well once you do a rebase; namely, doing a git push becomes more complicated.

    I am sure a rebase would easily become very necessary if I lived on bkuhn/new-feature for a long time and there had been tons of changes underneath me, but I generally try not to dive too deep into a fork, although many people love DVCS because they can do just that. YMMV, etc.

    Posted on Sunday 23 January 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-01-18: Free as in Freedom, Episode 0x07

    I realized that I should start regularly noting here on my blog when the oggcast that I co-host with Karen Sandler is released. There are perhaps folks who want content from my blog but haven't subscribed to the RSS feed of the show, and thus might want to know when new episodes come out. If this annoys people reading this blog, please let me know via email or identica.

    In particular, perhaps readers won't like that, in these posts (which are going to be written after the show), I'm likely to drift off into topics beyond what was talked about on the show, and there may be “spoilers” for the oggcast in them. Again, if this annoys you (or if you like it) please let me know.

    Today's FaiF episode is entitled Revoked?. The main issue of discussion is some recent confusions about the GPLv2 release of WinMTR. I was quoted in an article about the topic as well, and in the oggcast we discuss this issue at length.

    To summarize my primary point in the oggcast: I'm often troubled when these issues come up, because I've seen these types of confusions so many times before in the last decade. (I've seen this particular one, almost exactly like this, at least five times.) I believe that those of us who focus on policy issues in software freedom need to do a better job documenting these sorts of issues.

    Meanwhile, after we recorded the show, I was thinking again about how Karen points out in the oggcast that the primary issues are legal ones. I don't really agree with that. These are policy questions, perhaps informed by legal analysis, and it's policy folks (and, specifically, Free Software project leaders) who should be guiding the discussion, not necessarily lawyers.

    That's not to say that lawyers can't be policy folks as well; I actually think Karen and a few other lawyers I know are both. The problem is that if we simply take things like GPL on their face — as if they are unchanging laws of nature that simply need to be interpreted — we miss out on the fact that licenses, too, can have bugs and can fail to work the way that they should. A lawyer's job is typically to look at a license, or a law, or something more or less fixed in its existence and explain how it works, and perhaps argue for a particular position of how it should be understood.

    In our community, activists and project leaders who set (or influence) policy should take such interpretations as input, and output plans to either change the licenses and interpretation to make sure they properly match the goals of software freedom, or to build up standards and practices that work within the existing licensing and legal structure to advance the goal of building a world where all published software is Free Software.

    So, those are a few thoughts I had after recording; be sure to listen to FaiF 0x07 available in ogg and mp3 formats.

    Posted on Tuesday 18 January 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2011-01-02: Conservancy Activity Summary, 2010-10-01 to 2010-12-31

    [ Crossposted from Conservancy's blog. ]

    I had hoped to blog more regularly about my work at Conservancy, and hopefully I'll do better in the coming year. But now seems a good time to summarize what has happened with Conservancy since I started my full-time volunteer stint as Executive Director from 2010-10-01 until 2010-12-31.

    New Members

    We excitedly announced in the last few months two new Conservancy member projects, PyPy and Git. Thinking of PyPy connects me back to my roots in Computer Science: in graduate school, I focused on research about programming language infrastructure and, in particular, virtual machines and language runtimes. PyPy is a project that connects Conservancy to lots of exciting programming language research work of that nature, and I'm glad they've joined.

    For its part, Git rounds out a group of three DVCS projects that are now Conservancy members; Conservancy is now the home of Darcs, Git, and Mercurial. Amusingly, when the Git developers applied and I reminded them that their “competition” were already members, they told me that they had been inspired to apply precisely because those other DVCSes had been happy in Conservancy. That's a reminder that the software freedom community remains a place where projects — even ones that might seem on the surface to be competitors — seek to get along and work together whenever possible. I'm glad Conservancy now hosts all these projects together.

    Meanwhile, I remain in active discussions with five projects that have been offered membership in Conservancy. As I always tell new projects, joining Conservancy is a big step for a project, so it often takes time for communities to discuss the details of Conservancy's Fiscal Sponsorship Agreement. It may be some time before these five projects join, and perhaps they'll ultimately decide not to join. However, I'll continue to help them make the right decision for their project, even if joining a different fiscal sponsor (or not joining one at all) is ultimately the right choice.

    Also, about once every two weeks, another inquiry about joining Conservancy comes in. We won't be able to accept all the projects that are interested, but hopefully many can become members of Conservancy.

    Annual Filings

    In the late fall, I finished up Conservancy's 2010 filings. Annual filings for a non-profit can be an administrative rat-hole at times, but the level of transparency they create for an organization makes them worth it. Conservancy's FY 2009 Federal Form 990 and FY 2009 New York CHAR-500 are up on Conservancy's filing page. I always make the filings available on our own website; I wish other non-profits would do this. It's so annoying to have to go to a third-party source to grab these documents. (Although New York State, to its credit, makes all the NY NPO filings available on its website.)

    Conservancy filed a Form 990-EZ in FY 2009. If you take a look, I'd encourage you to direct the most attention to Part III (which is on the top of page 2) to see most of Conservancy's program activities between 2008-03-01 to 2009-02-28.

    In FY 2010, Conservancy will move from the New York State requirement of “limited financial review” to “full audit” (see page 4 of the CHAR-500 for the level requirements). Conservancy had so little money in FY 2007 that it wasn't required to file a Form 990 at all. Now, just three years later, there is enough revenue to warrant a full audit, and I've already begun preparing myself for all the administrative work that will entail.

    Project Growth and Funding

    Those increases in revenue are related to growth in many of Conservancy's projects. 2010 marked the beginning of the first full-time funding of a developer by Conservancy. Specifically, since June, Matt Mackall has been funded through directed donations to Conservancy to work full-time on Mercurial. Matt blogs once a month (under topic of Mercurial Fellowship Update) about his work, but, more directly, the hundreds of changesets that Matt's committed really show the advantages of funding projects through Conservancy.

    Conservancy is also collecting donations and managing funding for various part-time development initiatives by many developers. Developers of jQuery, Sugar Labs, and Twisted have all recently received regular development funding through Conservancy. An important part of my job is making sure these developers receive funding and report the work clearly and fully to the community of donors (and the general public) that fund this work.

    But, as usual with Conservancy, it's the handling of the “many little things” for projects that makes a big difference and sometimes takes the most time. In late 2010, Conservancy handled funding for code sprints and conferences for the Mercurial, Darcs, and jQuery projects. In addition, jQuery held a conference in Boston in October, for which Conservancy handled all the financial details. I was fortunate to be able to attend the conference and meet many of the jQuery developers in person for the first time. Wine also held its annual conference in November 2010, and Conservancy handled the venue details and reimbursements for many of the travelers to the conference.

    Also, as always, Conservancy project contributors regularly attend other conferences related to their projects. At least a few times a month, Conservancy reimburses developers for travel to speak and attend important conferences related to their projects.

    Google Summer of Code

    Since its inception, Google's Summer of Code (SoC) program has been one of the most important philanthropy programs for Open Source and Free Software projects. In 2010, eight Conservancy projects (5% of the entire SoC program) participated in SoC. The SoC program funds college students for the summer to contribute to the projects, and an experienced project contributor mentors each student. A $500 stipend is paid to the project's non-profit organization for each contributor who mentors a student.

    Furthermore, there's an annual conference, in October, of all the mentors, with travel funded by Google. This is a really valuable conference, since it's one of the few places where very disparate Free Software projects that usually wouldn't interact can meet up in one place. I attended this year's SoC Mentor Summit and hope to attend again next year.

    I'm really going to be urging all of Conservancy's projects to take advantage of the SoC program in 2011. The level of funding given out by Google for this program is higher than that of any other open-application funding program for FLOSS. While Google's selfish motives are clear (the program presumably helps them recruit young programmers to hire), the benefit of the program to the Free Software community nevertheless cannot be ignored.

    GPL Enforcement

    GPL Enforcement, primarily for our BusyBox member project, remains an active focus of Conservancy. Work regarding the lawsuit continues. It's been more than a year since Conservancy filed a lawsuit against fourteen defendants who manufacture embedded devices that included BusyBox without source or an offer for source. Some of those have come into compliance with the GPL and settled, but a number remain out of compliance; our litigation efforts continue. Usually, our lawyers encourage us not to comment on ongoing litigation, but we did put up a news item in August when the Court granted Conservancy a default judgment against one of the defendants, Westinghouse.

    Meanwhile, in the coming year, Conservancy hopes to expand efforts to enforce the GPL. New violation reports on BusyBox arrive almost daily that need attention.

    More Frequent Blogging

    As noted at the start of this post, my hope is to update Conservancy's blog more regularly with information about our activities.

    This blog post was covered on LWN and on

    Posted on Sunday 02 January 2011 by Bradley M. Kuhn.

    Comment on this post in this conversation.



  • 2010-11-16: In Defense of Bacon

    Jono Bacon is currently being criticized for the manner in which he launched an initiative called OpenRespect.Org. Much of this criticism is unfair, and I decided to write briefly here in support of Jono, because he's a victim of a type of mistreatment that I've experienced myself, so I have particularly strong empathy for his situation.

    To be clear, I'm not even a supporter of Jono's OpenRespect.Org initiative myself. I think there are others who are doing good work in this area already (for example, various efforts around getting women involved in Free Software have long recognized and worked on the issue, since mutual respect is an essential part of having a more diverse community). Also, I felt that Jono's initiative was slanted toward encouraging people to respect all actions by companies, some of which don't advance Free Software. I commented on Jono's blog to share my criticisms of the initiative when he was still formulating it. In short, I think the wording of the current statement seems to indicate that people should accept anyone else's choice as equally moral. As someone who believes software freedom is a moral issue, and thus views development and distribution of proprietary software as an immoral act, I have a problem with such a mandate, although I nevertheless strive to be respectful in pursuit of that view. I would hate to be declared disrespectful merely because I believe in the morality of software freedom.

    Yet, despite the fact that I disagree with some of the details of Jono's initiative, I believe most of the criticisms have been unfair. First and foremost, we should take Jono at his word that this initiative is his own and not one undertaken on behalf of Canonical, Ltd. I doubt Jono would dispute that his work at Canonical, Ltd. inspired him to think about these issues, but that doesn't mean that everything he does on his own time on his own website is a Canonical, Ltd. activity.

    Indeed, I've personally been similarly attacked for items I've said on this blog of my own, which of course does not represent the views of any of my employers (past nor present) nor any organizations with which I have volunteer affiliations. When I have things to say on those topics, I have other fora to post officially, as does Jono.

    So, I've experienced first-hand what Jono is currently experiencing: namely, that people ignore disclaimers precisely to attack someone who has an opinion that they don't like. By conflating your personal opinions with those of your employer, people subtly discredit you — for example, by using your employment relationship to put inappropriate pressure on you to change your positions. I'm very sad to see that this same thing I've been a victim of is now happening to Jono, too. I couldn't just watch it happen without making a statement of solidarity and pointing out that such treatment is unfair.

    Even if we don't agree with the initiative (and I don't, for reasons stated above), there is no one to blame but Jono himself, as he's told us clearly this isn't a Canonical initiative, and I've seen no evidence that shows the situation is otherwise.

    I do note that there are other criticisms raised, such as whether or not Jono reached out in the best possible way to others during the launch, or whether others thought they'd be involved when it turned out to be a unilateral initiative. All of that, of course, is something that's reparable (as is my primary complaint above, too), so on those fronts, we should just give our criticism and ask Jono to change it. That's what I did on my issue. He chose not to take my advice, which is his prerogative. My response thereafter was simply to not support the initiative.

    To the extent we don't have enough respect in the FLOSS community, here's an easy place to improve: we should take people at their word until we have evidence to believe otherwise. Jono says it is his own thing; we should believe him. We shouldn't insist that everything someone says is on behalf of their employer, even if they have a spokesperson role. People have a right to be something more than automatons for their bosses.

    Disclosure: I did not tell Jono I was going to write this post, but after it was completely written, I gave him the chance to make a binary decision about whether I posted it publicly or not. Since you're reading this, he obviously answered 1.

    Posted on Tuesday 16 November 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-11-15: Comments on Perens' Comments on Software Patents

    Bruce Perens and I often disagree about lots of things. However, I urge everyone to read what Bruce wrote this weekend about software patents. I'm very glad he's looking deep into recent events surrounding this issue; I haven't had the time to do so myself because I've been so busy with the launch of my full-time work at Conservancy this fall.

    Despite my current focus on getting Conservancy ramped up with staff, so it can do more of its work, I nevertheless still remain frightfully concerned about the impact of software patents on the future of software freedom, and I support any activities that seek to make sure that software patent threats do not stand in the way of software freedom. Bruce and I have always agreed about this issue: software patents should end, and while individuals with limited means can't easily make that happen themselves, we must all work to raise awareness and public opinion against all patenting of software.

    Specifically, I'm really glad that Bruce has mentioned the issue of lobbying against software patents. Post-Bilski, it's become obvious that software patents can only be ended with legislative change. In the USA, sadly, the only way to do this effectively is through lobbying. Therefore, I've called on businesses (such as Google and Red Hat), that have been targets of software patent litigation, to fund lobbying efforts to end software patents; such funding would simultaneously help themselves as well as software freedom. Unfortunately, as far as I'm aware, no companies have stepped forward to fund such an effort, and they instead seem to spend their patent-related resources on getting more software patents of their own. Meanwhile, individual, not-for-profit Free Software developers simply don't have the resources to do this lobbying work ourselves.

    Nevertheless, there are still a few things individual developers can do in the meantime against software patents. I wrote a complete list of suggestions after Bilski; I just reread it and confirmed all of the suggestions listed there are still useful.

    Posted on Monday 15 November 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2010-10-20: Open Letter: Adopt RMS' CAA/CLA Suggested Texts

    I was glad to read today that Sam Varghese is reporting that Mark Shuttleworth doesn't want Canonical, Ltd. to engage in business models that abuse proprietary relicensing powers. I wrote below a brief open letter to Mark for him to read when he returns from UDS (since the article said he would handle this in detail upon his return from there). Fortunately, there is a simple test of whether Mark's words are a genuine commitment to change by Canonical, Ltd.: a simple action he can take to show that he means to follow through on his statement:

    Dear Mark,

    I was glad to read today that you have no plans to abuse the powers of proprietary relicensing that Canonical, Ltd.'s CAAs/CLAs give you. As you are hopefully already aware, Richard Stallman published a few suggested texts to use if you are attempting to only consider benign business models as part of your CAA/CLA process. Since you've committed to that, I would expect you'd be ready, willing and able to adopt those immediately for Canonical, Ltd.'s CLAs and CAAs. When will you do so?

    Thanks very much for taking my criticisms seriously and I look forward to seeing this change soon in Canonical, Ltd.'s CAAs and/or CLAs.

    Posted on Wednesday 20 October 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-10-19: Does “Open Core” Actually Differ from Proprietary Relicensing?

    I've been criticized — quite a bit this week, but before that too — for using the term “Open Core” as a shortcut for the phrase “proprietary relicensing0 that harms software freedom”. Meanwhile, Matt Aslett points to Andrew Lampitt's “Open Core” definition as canonical. I admit I wasn't aware of Lampitt's definition before, but I dutifully read it when Aslett linked to it, and I quote it here:

    [Lampitt] propose[s] the following for the Open Core Licensing business model:
    • core is GPL: if you embed the GPL in closed source, you pay a fee
    • technical support of GPL product may be offered for a fee (up for debate as to whether it must be offered)
    • annual commercial subscription includes: indemnity, technical support, and additional features and/or platform support. (Additional commercial features having viewable or closed source, becoming GPL after timebomb period are both up for debate).
    • professional services and training are for a fee.

    The amusing fact about this definition is that half the things on it (i.e., technical support, professional services/training, and indemnity) can be part of any FLOSS business model and do not require the offering company to hold the exclusive right of proprietary relicensing. Meanwhile, the rest of the items on the list are definitely part of what was traditionally called the “proprietary relicensing business” dating back to the late 1990s: namely, customers can buy their way out of GPL obligations, and a single company can exclusively offer proprietary add-ons. For example, this is precisely what Ximian did with their Microsoft Exchange Connector for Evolution, which predated the first use of the term “Open Core” by nearly a decade. Cygnus also used this model for Cygwin, which has unfortunately continued at Red Hat (although Richard Fontana of Red Hat wants to end the copyright assignment of Cygwin).

    In my opinion, mass terminology confusion exists on this point simply because there is a spectrum1 of behaviors that are all under the banner of “proprietary relicensing”. Moreover, these behaviors get progressively worse for software freedom as you continue down the spectrum. Nearly the entire spectrum consists of activities that are harmful to software freedom (to varying degrees), but the spectrum does begin with a practice that is barely legitimate.

    That practice is one that RMS himself began calling barely legitimate in the early 2000s. RMS specifically and carefully coined his own term for it: selling exceptions to the GPL. This practice is a form of proprietary relicensing that never permits the seller to create their own proprietary fork of the code and always releases all improvements done by the sole proprietary licensee itself to the general public. If this practice is barely legitimate, it stands to reason that anything that goes even just a little bit further crosses the line into illegitimacy.

    From that perspective, I view this spectrum of proprietary relicensing thusly: on the narrow benign end of the spectrum we find what RMS calls “selling exceptions”, and on the other end, we find GPL'd demoware that is merely functional enough to convince customers to call up the company to ask to buy more. Everything beyond “selling exceptions” is harmful to software freedom, getting progressively more harmful as you move further down the spectrum. Also, notwithstanding Lampitt's purportedly canonical definition, “Open Core” doesn't really have a well-defined meaning. The best we can say is that “Open Core” must be something beyond “selling exceptions” and therefore lives somewhere outside of the benign areas of “proprietary relicensing”. So, from my point of view, it's not a question of whether or not “Open Core” is a benign use of the GPL: it clearly isn't. The only question to be asked is: how bad is it for software freedom, a little or a lot? Furthermore, I don't really care that much how far a company gets into “proprietary relicensing”, because I believe it's already likely to be harmful to software freedom. Thus, focusing debate only on how bad is it? seems to be missing the primary point: we should shun nearly all proprietary relicensing models entirely.

    Furthermore, I believe that once a company starts down the path of this proprietary relicensing spectrum, it becomes a slippery slope. I have never seen the benign “exception selling” last for very long in practice. Perhaps a truly ethical company might stick to the principle, and would thus use an additional promise-back, as RMS suggests, to prove to the community they will never veer from it. RMS' suggested texts have only been available for less than a month, so more time is needed to see if they are actually adopted. Of course, I call on any company asking for a CLA and/or CAA to adopt RMS' texts, and I will laud any company that does.

    But, pragmatically, I admit I'll be (pleasantly) surprised if most CAA/CLA-requesting companies come forward to adopt RMS' suggested texts. We have a long historical list of examples of for-profit corporate CAAs and CLAs being used for more nefarious purposes than selling exceptions, even when that wasn't the original intent. For example2, when MySQL AB switched to GPL, they started benignly selling exceptions, but, by the end of their reign, part of their marketing was telling potential “customers” that they'd violated the GPL even when they hadn't — merely to manipulate the customer into buying a proprietary license. Ximian initially had no plans to make proprietary add-ons to Evolution, but nevertheless made use of their copyright assignment to make the Microsoft Exchange Connector. Sourceforge, Inc. (named VA Linux at the time) even went so far as to demand copyright assignments on the Sourceforge code after the fact (writing out changes by developers who refused) so they could move to an “Open Core”-style business model. (Ultimately, the code became merely demoware for a proprietary product.)

    In short, handing over copyright assignment to a company gives that company a lot of power, and it's naïve to believe a for-profit company won't use every ounce of that power to make a buck when it's not turning a profit otherwise. Non-profit assignees, for their part, mitigate the situation by making firm promises back regarding what will and won't be done with the code, and also (usually) have well-defined non-profit missions that prevent them from moving in troubling directions. For-profit companies don't usually have either.

    Without strong assurances in the agreement, like the ones RMS suggests, individual developers simply must assume the worst when assigning copyright and/or giving a broad CLA to a for-profit company. Whether we can ever determine what is or is not “Open Core”, history shows us that for-profit companies with exclusive proprietary relicensing power eventually move away from the (extremely narrow) benign end of the proprietary relicensing spectrum.

    0Most pundits will prefer the term “dual licensing” for what I call “proprietary relicensing”. I urge avoidance of the term “dual licensing”. “Dual licensing” also has a completely orthogonal denotative usage: a Free Software license that has two branches, like jQuery's license of (GPLv2-or-later|MIT). That terminology usage was quite common before even the first “proprietary relicensing” business model was dreamed of, and therefore it only creates confusion to overload that term further.

    1BTW, Lampitt does deserve some credit here. His August 2008 post hints at this spectrum idea of proprietary licensing models. His post doesn't consider the software-freedom implications of the various types, but it seems to me the post was ahead of its time two years ago, and I wish I'd seen it sooner.

    2I give here just a few of the many examples, which actually name names. Although he doesn't name names, Michael Meeks, in his Some Thoughts on Copyright Assignment, gives quite a good laundry list of all the software-freedom-unfriendly things that have historically happened in situations where CAA/CLAs without adequate promises back were used.

    Posted on Tuesday 19 October 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-10-17: Canonical, Ltd. Finally On Record: Seeking Open Core

    I've written before about my deep skepticism regarding the true motives of Canonical, Ltd.'s advocacy and demand of for-profit corporate copyright assignment without promises to adhere to copyleft. I've often asked Canonical employees, including Jono Bacon, Amanda Brock, Jane Silber, Mark Shuttleworth himself, and — in the comments of this very blog post — Matt Asay to explain (a) why exactly they demand copyright assignment on their projects, rather than merely having contributors agree to the GNU GPL formally (like projects such as Linux do), and (b) why, having received a contributor's copyright assignment, Canonical, Ltd. refuses to promise to keep the software copylefted and never proprietarize it (FSF, for example, has always done the latter in assignments). When I ask these questions of Canonical, Ltd. employees, they invariably artfully change the subject.

    I've actually been asking these questions for at least a year and a half, but I really began to get worried earlier this year when Mark Shuttleworth falsely claimed that Canonical, Ltd.'s copyright assignment was no different than the FSF's copyright assignment. That event made it clear to me that there was a job of salesmanship going on: Canonical, Ltd. was trying to sell something to the community that the community neither wants nor needs, and trying to reuse the good name of other people and organizations to do it.

    Since that interview in February, Canonical, Ltd. has launched a manipulatively named product called “Project Harmony”. They market this product as a “summit” of sorts — purported to have no determined agenda other than to discuss the issue of contributor agreements and copyright assignment, and come to a community consensus on this. Their goal, however, was merely to get community members to lend their good names to the process. Indeed, Canonical, Ltd. has oft attempted to use the involvement of good people to make it seem as if Canonical, Ltd.'s agenda is endorsed by many. In fact, FSF recently distanced itself from the process because of Canonical, Ltd.'s actions in this regard. Simon Phipps had similarly distanced himself before that.

    Nevertheless, it seems Canonical, Ltd. now believes that they've succeeded in their sales job, because they've now confessed their true motive. In an IRC Q&A session last Thursday0, Shuttleworth finally admits that his goal is to increase the amount of “Open Core” activity. Specifically, Shuttleworth says at 15:21 (and following):

    [C]ompare Qt and Gtk, Qt has a contribution agreement, Gtk doesn't, for a while, back in the bubble, Sun, Red Hat, Ximian and many other companies threw money at Gtk and it grew and improved very quickly but, then they lost interest, and it has stagnated. Qt was owned by Trolltech it was open source (GPL) but because of the contribution agreement they had many options including proprietary licensing, which is just fine with me alongside the GPL and later, because they owned Qt completely, they were an attractive acquisition for Nokia, All in all, the Qt ecosystem has benefitted and the Gtk ecosystem hasn't.

    It takes some careful analysis to parse what's going on here. First of all, Shuttleworth is glossing over a lot of complicated Qt history. Qt started with a non-FaiF license (QPL), which later became a GPL-incompatible Free Software license. After a few years of this oddball, license-proliferation-style software freedom license, Trolltech stumbled upon the “Open Core” model (likely inspired by MySQL AB), and switched to GPL. When Nokia bought Trolltech, Nokia itself discovered that full-on “Open Core” was bad for the code base, and (as I heralded at the time) relicensed the codebase to LGPL (the same license used by Gtk). A few months after that, Nokia abandoned copyright assignment completely for Qt as well! (I.e., Shuttleworth is just wrong on this point entirely.) In fact, Shuttleworth, rather than supporting his pro-Open-Core argument, actually gave the prime example of Nokia/Trolltech's lesson learned: “don't do an Open-Core-style contributor agreement, you'll regret it”. (RMS also recently published a good essay on this subject).

    Furthermore, Shuttleworth also ignores completely plenty of historical angst in communities that rely on Qt, which often had difficulty getting bugfixes upstream and other such challenges when dealing with a for-profit controlled “Open Core” library. (These were, in fact, among the reasons Nokia gave in May 2009 for the change in policy). Indeed, if the proprietary relicensing business is what made Trolltech such a lucrative acquisition for Nokia, why did they abandon the business model entirely within four months of the acquisition?

    Admittedly, Shuttleworth's “lucrative acquisition” point has some validity. Namely, “Open Core” makes wealthy, profit-driven types (e.g., VCs) drool. Meanwhile, people like me, Simon Phipps, NASA's Chris Kemp, John Mark Walker, Tarus Balog and many others are either very skeptical about “Open Core”, or dead-set against it. The reason it's meeting with so much opposition is that “Open Core” is a VC-friendly way to control all the copyright “assets” while pretending to actually have the goal of building an Open Source community. The real goal of “Open Core”, of course, is a bait-and-switch move. (Details on that are beyond the scope of this post and well covered in the links I've given.)

    As to Shuttleworth's argument of Gtk stagnation, after my trip this past summer to GUADEC, I'm quite convinced that the GNOME community is extremely healthy. Indeed, as Dave Neary's GNOME Census shows, the GNOME codebases are well-contributed to by various corporate entities and (more importantly) volunteers. For-profit corporate folks like Shuttleworth and his executives tend not to like communities where a non-profit (in this case, the GNOME Foundation) shepherds a project and keeps the multiple for-profit interests at bay. In fact, he dislikes this so much that when GNOME was recently documenting its long-standing copyright policies, he sent Silber to the GNOME Advisory Board (the first and only time Canonical, Ltd. sent such a high profile person to the Advisory Board) to argue against the long-standing GNOME community preference for no copyright assignment on its projects1. Silber's primary argument was that it was unreasonable for individual contributors to even ask to keep their own copyrights, since Canonical, Ltd. puts in the bulk of the work on their projects that require copyright assignment. Her argument was, in other words, an anti-software-freedom equality argument: a for-profit company is more valuable to the community than the individual contributor. Fortunately, GNOME Foundation didn't fall for this, and continued its work with Intel to get the Clutter codebase free of copyright assignment (and that work has since succeeded). It's also particularly ironic that, a few months later, Neary showed that the very company making that argument contributes 22% less to the GNOME codebase than the volunteers Silber once argued don't contribute enough to warrant keeping their copyrights.

    So, why have Shuttleworth and his staff been on a year-long campaign to convince everyone to embrace “Open Core” and give up all their rights that copyleft provides? Well, in the same IRC log (at 15:15) I quoted above, Shuttleworth admits that he has some work left to do to make Canonical, Ltd. profitable. And therein lies the connection: Shuttleworth admits Canonical, Ltd.'s profitability is a major goal (which is probably obvious). Then, in his next answer, he explains at great length how lucrative and important “Open Core” is. We should accept “Open Core”, Shuttleworth argues, merely because it's so important that Canonical, Ltd. be profitable.

    Shuttleworth's argument reminds me of a story that Michael Moore (who famously made the documentary Roger and Me, and has since made other documentaries) told at a book-signing in the mid-1990s. Moore said (I'm paraphrasing from memory here, BTW):

    Inevitably, I end up on planes next to some corporate executive. They look at me a few times, and then say: Hey, I know you, you're Roger Moore [audience laughs]. What I want to know, is what the hell have you got against profit? What's wrong with profit, anyway? The answer I give is simple: There's nothing wrong with profit at all. The question I'm raising is: What lengths are acceptable to achieve profit? We all agree that we can't exploit child labor and other such things, even if that helps profitability. Yet, once upon a time, these sorts of horrible policies were acceptable for corporations. So, my point is that we still need more changes to balance the push for profit with what's right for workers.

    I quote this at length to make it abundantly clear: I'm not opposed to Canonical, Ltd. making a profit by supporting software freedom. I'm glad that Shuttleworth has contributed a non-trivial part of his personal wealth to start a company that employs many excellent FLOSS developers (and even sometimes lets those developers work on upstream projects). But the question really is: Are the values of software freedom worth giving up merely to make Canonical, Ltd. profitable? Should we just accept proprietary network services like UbuntuOne, integrated into nearly every menu of the desktop, as reasonable merely because they might help Canonical, Ltd. make a few bucks? Do we think we should abandon copyleft's assurances of fair treatment to all, and hand over full proprietarization powers on GPL'd software to for-profit companies, merely so they can employ a few FLOSS developers to work primarily on non-upstream projects?

    I don't think so. I'm often critical of Red Hat, but one thing they do get right in this regard is a healthy encouragement of their developers to start, contribute to, and maintain upstream projects that live in the community rather than inside Red Hat. Red Hat currently allows its engineers to keep their own copyrights and license them under whatever license the upstream project uses, binding them to the terms of the copyleft licenses (when the upstream project is copylefted). For projects generated inside Red Hat, after experimenting with the sorts of CLAs that I'm complaining about, they learned from the mistake and corrected it (although unfortunately, Red Hat hasn't universally corrected the problem). For the most part, Red Hat encourages outside contributors to give under their own copyright under the outbound license Red Hat chose for its projects (some of which are also copylefted). Red Hat's newer policies have some flaws (details of which are beyond the scope of this post), but it's orders of magnitude better than the copyright assignment intimidation tactics that other companies, like Canonical, Ltd., now employ.

    So, don't let a friendly name like “Harmony” fool you. Our community has some key infrastructure, such as the copyleft itself, that actually keeps us harmonious. Contributor agreements aren't created equal, and therefore we should oppose the idea that contributor and assignment agreements should be set to the lowest common denominator to enable a for-profit corporate land-grab that Shuttleworth and other “Open Core” proponents seek. I also strongly advise the organizations and individuals who are assisting Canonical, Ltd. in this goal to stop immediately, particularly now that Shuttleworth has announced his “Open Core” plans.

    Update (2010-10-18): In comments, many people have, quite correctly, argued that I have not proved that Canonical, Ltd. has plans to go “Open Core” with their copyright-assigned copyleft products. Such comments are correct; I intended this article to be an opinion piece, not a logical proof. I further agree that without absolute proof, the title of this blog post is an exaggeration. (I didn't change it, as that seemed disingenuous after the fact).

    Anyway, to be clear, the only thing the chain of events described above prove is that Canonical, Ltd. wants “Open Core” as a possibility for the future. That part is trivially true: if they didn't want to reserve the possibility, they'd simply make a promise-back to keep the software as Free Software in their assignment. The only reason not to make an FSF-style promise-back is that you want to reserve the possibility of proprietary relicensing.

    Meanwhile, even though I cannot construct a logical proof of it, I still believe the only possible explanation for this 1+ year marketing campaign described above is that Canonical, Ltd. is moving toward “Open Core” for those projects on which they are the sole copyright holder. I have asked others to offer alternative explanations of why Canonical, Ltd. is carrying out this campaign: I agree that there could exist another logical explanation other than the one I've presented. If someone can come up with one, then I would be happy to link to it here.

    Finally, if Canonical, Ltd. comes out with a statement that they'll switch to using FSF's promise-back in their assignments, I will be very happy to admit I was wrong. The outcome I want is for individual developers to be treated right by corporations in control of particular codebases; I would much rather that happen than be correct in my opinions.

    0I originally credited OMG Ubuntu as publishing Shuttleworth's comments as an interview. Their reformatting of his comments temporarily confused me, and I thought they'd done an interview. Thanks to @gotunandan who pointed this out.

    1Ironically, the debate had nothing to do with a Canonical, Ltd. codebase, since their contributions amount to so little (1%) of the GNOME codebase anyway. The debate was about the Clutter/Intel situation, which has since been resolved.

    Responses Not In the Identica Thread:

    • Alex Hudson's blog post
    • Discussion on Hacker News
    • LWN comments
    • Matt Aslett's response and my response to him
    • Ingolf Schaefer's blog post, which only allows comments with a Google Account, so I comment below instead (to be clear, I'm not criticizing Ingolf's choice of Google-account-to-comment, especially since I make everyone who wants to comment here sign up for ;):

      Ingolf, you noted that you'd rather I not try to read between the lines to deduce that proprietary relicensing and/or “Open Core” is where Canonical, Ltd.'s marketing is leading. I disagree; I think it's useful to consider what seems a likely end-outcome here. My primary goal is to draw attention to it now in hopes of preventing it from happening. My best possible outcome is that I get proved wrong, and Canonical makes a promise-back in their assignment and/or CLA.

      Meanwhile, I don't think they can go “Open Core” and/or proprietary relicensing for all of Ubuntu, as you are saying. They aren't the sole copyright holder in most of Ubuntu. The places where they can pursue these options are Launchpad, pbuilder, upstart, and the other projects that require CLA and/or assignment.

      I don't know for sure that they'll do this, as I say above. I can deduce no other explanation. As I keep saying, if someone else has another possible explanation for Canonical, Ltd.'s behavior that I list above, I'm happy to link to it here. I can't see any other reason; they'd surely have made an FSF-style promise-back in their CLA by now if they didn't want to hold proprietarization as a possibility.

    Posted on Sunday 17 October 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-10-04: Conservancy's First Blog Post

    [ Crossposted from Conservancy's blog. ]

    As can be seen in today's announcement, today is my first day as full-time Executive Director at the Software Freedom Conservancy. For four years, I have worked part-time on nights, weekends, and lunch times to keep Conservancy running and to implement and administer the services that Conservancy provides to its member projects. It's actually quite a relief to now have full-time attention available to carry out this important work.

    From the start, one of my goals with Conservancy has been to run the non-profit organization as transparently as possible. At times, I've found that when time is limited, keeping the public informed about all your work is often the first item to fall too far down on the action item list. Now that Conservancy is my primary, daily focus, I hope to increase its transparency as much as possible.

    Specifically, I plan to keep a regular blog about activities of the Conservancy. I've found that a public blog is a particularly convenient way to report to the public in a non-onerous way about the activities of an organization. Indeed, we usually ask those developers whose work is funded through Conservancy to keep a blog about their activities, so that the project's community and the public at large can get regular updates about the work. I should hold myself to no less a standard!

    I encourage everyone to subscribe to the full Conservancy site RSS feed, where you'll receive both news items and blog posts from the Conservancy. There are also separate feeds available for just news and just blog posts. Also, if you're a subscriber to my personal blog, I will cross-post these blog posts there, although my posts on Conservancy's blog will certainly be a proper subset of my entire personal blog.

    Posted on Monday 04 October 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2010-09-11: Two Thank-Yous

    I'm well known for being critical when necessary about what happens in the software freedom community, but occasionally, there's nothing to do but thank someone, particularly when they've done something I asked for. :)

    First, I'd like to thank Matthew Garrett for engaging in some GPL enforcement. He's taking an interesting tack of filing a complaint with US Customs. I've thought about this method in the past, but never really felt I wanted to go that route (mainly because I'm more familiar with the traditional GPL enforcement processes). However, it's really important that we try lots of different strategies for GPL enforcement; the path to success is often many methods in parallel. It looks like Matthew already got the attention of the violator. In the end, every GPL enforcement strategy is primarily to get the violator's attention so they take the issue seriously and come into compliance with the license.

    I've written before about how GPL enforcement can be a lonely place, and when I see someone get serious about doing some — as Matthew has in the last year or so — it makes GPL enforcement a lot less lonely. I still think I can count on my hands all the people active regularly in GPL enforcement efforts, but I am glad to see that's changing. The license stands for a principle, and we should defend it, despite the great lengths the corporate powers in the software freedom world go to in trying to stop GPL enforcement.

    Secondly, I need to thank my colleague Chris DiBona. Two years ago, I gave him quite a hard time that Google prohibited hosting of AGPLv3'd projects on its FLOSS Project Hosting site. The interesting part of our debate was that Chris argued that license proliferation was the reason to prohibit AGPLv3. I argued at the time that Google simply opposed AGPLv3 because many parts of Google's business model rely on the fact that the GPL behaves in practice somewhat like permissive licenses when deployed in a web services environment.

    Honestly, I never had definitive proof of Google's “real reasons” for holding the policy it did for two years, but it doesn't matter now, because yesterday Chris announced that Google Code Hosting now accepts AGPLv3'd projects0. I really appreciate Chris' friendly words on AGPLv3, noting that he didn't like turning away projects under licenses that serve a truly new function, like the AGPL.

    Google will now accept projects under any license that is on OSI's approved list. I think this is a reasonable outcome. I firmly believe that acceptable license lists must be the purview of not-for-profit organizations, not for-profit ones. Personally, I tend to avoid and distrust any license that fails to appear on both OSI's list and the FSF Free Software License List. While I obviously favor the FSF list myself (having helped originate it), I generally want to see a license on both lists before I'm ready to say for sure there are no worries about it.

    There are two other entities that maintain license lists, namely the Debian Project and Red Hat's Fedora Project. I wouldn't say that I find Debian's list definitive, mainly because, despite Debian's generally democratic slant, the ftp-masters hold a bit too much power in interpreting the DFSG.

    As for Fedora, that's ultimately a project controlled by a for-profit corporation (Red Hat), and therefore I have some trepidation about trusting their list, just as I had concerns that Google attempted to set licensing policy by defining an acceptable license list. As it stands at the moment, I trust Fedora's list because I know that Spot and Fontana currently have the ultimate say on what does or does not go onto Fedora's list. Nevertheless, Red Hat is ultimately in control of Fedora, so I think its license list can't be relied on indefinitely (e.g., in case Spot and/or Fontana ever leave Red Hat at some point.)

    Anyway, I think the best outcome for the community is for the logical conjunction of the OSI's list and the FSF's list to be considered the accepted list of licenses. While I often disagree with the OSI, I think it's in the best interest of the community to require that two distinct non-profits with different missions both approve a license before it's considered acceptable. (I suppose I'd have a different view if OSI had not accepted the AGPLv3, though. ;)
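    The “logical conjunction” idea above can be sketched as a simple set intersection: a license is considered acceptable only if it appears on both organizations' lists. This is a minimal illustration of that policy, assuming SPDX-style identifiers; the sample lists below are small hypothetical subsets, not the actual OSI or FSF lists.

```python
# Sketch of the "accept only if both non-profits approve" policy.
# These sets are illustrative samples only, not the real lists.
osi_approved = {"GPL-2.0", "GPL-3.0", "AGPL-3.0", "MIT", "Apache-2.0"}
fsf_free = {"GPL-2.0", "GPL-3.0", "AGPL-3.0", "MIT", "Apache-2.0", "WTFPL"}

def acceptable(license_id: str) -> bool:
    """A license is acceptable only if it is on BOTH lists (set intersection)."""
    return license_id in (osi_approved & fsf_free)

print(acceptable("AGPL-3.0"))  # True: approved by both in this sample
print(acceptable("WTFPL"))     # False: on only one list in this sample
```

    Requiring the intersection, rather than either list alone, is exactly the safeguard described above: no single organization's judgment (or change of heart) is sufficient on its own.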

    0I must point out that Chris has an error in his blog post: namely, FSF's Code hosting site, Savannah accepts not just GPL'd projects, but any project that is listed as “GPL-Compatible” on FSF's Free Software License List.

    Posted on Saturday 11 September 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2010-08-27: The Saga of Sun RPC

    I first became aware of the Sun RPC license in mid-2001, but my email archives from the time indicate the issue predated my involvement with it; it'd been an issue of consideration since 1994. I later had my first large email thread “free-for-all” on the issue in April 2002, which was the first of too many that I'd have before it was all done. In December 2002, the Debian bug was filed, and then it became a very public debate. Late last week, it was finally resolved. It now ranks as the longest-standing Free Software licensing problem of my career. A cast of dozens deserve credit for getting it resolved.

    Tom “spot” Callaway does a good job summarizing the recent occurrences on this issue (and by recent, I mean since 2005 — it's been going long enough that five years ago is “recent”), and its final resolution. So, I won't cover that recent history, but I encourage people to read Spot's summary. Simon Phipps, who worked on this issue during his time as the Chief Open Source Officer of Sun, also wrote about his work on the issue. For my part, I'll try to cover the “middle” part of the story from 2001-2005.

    So, the funny thing about this license is everyone knew it was Sun's intention to make it Free Software. The code is so old, it dates back to a time when the drafting of Free Software licenses wasn't well understood (old-schoolers will, for example, remember the annoying advertising clause in early BSD licenses). Thus, by our modern standards, the Sun RPC license does appear on its face as trivially non-Free, but in its historical context, the intent was actually clear, in my opinion.

    Nevertheless, by 2002, we knew how to look at licenses objectively and critically, and it was clear to many people that the license had problems. Competing legal theories existed, but the concerns of Debian were enough to get everyone moving toward a solution.

    For my part, I checked in regularly during 2002-2004 with Danese Cooper (who was, effectively, Simon Phipps' predecessor at Sun), until I was practically begging her to pay attention to the issue. While I could frequently get verbal assurances from Danese and other Sun officials that it was their clear intention that glibc be permitted to include the code under the LGPL, I could never get something in writing. I had a hundred other things to worry about, and eventually, I stopped worrying about it. I remember thinking at the time: well, I've notes on all these calls and discussions I've had with Sun people about the license. Worst case scenario: I'll have to testify to this when Sun sues some Free Software project, and there will be a good estoppel defense.

    Meanwhile, around early 2004, my friend and colleague at FSF, David “Novalis” Turner took up the cause in earnest. I think he spent a year or two as I did: desperately trying to get others to pay attention and solve the problem. Eventually, he left FSF for other work, and others took up the cause, including Brett Smith (who took over Novalis' FSF job), and, by that time, Spot was also paying attention to this. Both Brett and Spot worked hard to get Simon Phipps' attention on it, which finally happened. But around then began that long waiting period while Oracle was preparing to buy Sun. It stopped almost anything anyone wanted to get done with Sun, so everyone just waited (again). It was around that time that I decided I was pretty sure I never wanted to hear the phrase “Sun RPC license” again in my life.

    Meanwhile, Richard Fontana had gone to work for Red Hat, and his self-proclaimed pathological obsession with Free Software (which can only be rivaled by my own) led him to begin discussing the Sun RPC issue again. He and Spot were also doing their best negotiating with Oracle to get it fixed. They took us the last miles of this marathon, and now the job is done.

    I admit that I feel some shame that, in recent years, I've had such fatigue about this issue — a simple one that should've been solved a decade and a half ago — that, since 2008, I've done nothing but kibitz about the issue when people complained. I also didn't believe that a company as disturbing and anti-Free-Software as Oracle could ever be convinced to change a license to be more FaiF. Spot and Fontana proved me wrong, and I'm glad.

    Thanks to everyone in this great cast of characters that made this ultimately beneficial production of licensing theater possible. I've been honored that I shared the stage in the first few acts, and sorry that I hid backstage for the last few. It was right to keep working on it until the job was done. As Fontana said: Estoppel may be relevant but never enough; software freedom principle[s] should matter as much as legal risk. … [the] standard for FaiF can't simply be ‘good defense to copyright infringement likely’. Thanks to everyone; I'm so glad I no longer have to wait in fear of a subpoena from Oracle in a lawsuit claiming infringement of their Sun RPC copyrights.

    Posted on Friday 27 August 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-08-16: Considerations For FLOSS Hackers About Oracle vs. Google

    Many have already opined about the Oracle v. Google lawsuit filed last week. As you might expect, I'm not that worried about what company sues what company for some heap of cash; that sort of for-profit wrangling just isn't what concerns me. Rather, I'm focused on what this event means for the future of software freedom. And, I think even at this early stage of the lawsuit, there are already a few lessons for the Free Software community to learn.

    Avoid Single-Company-Controlled Language Infrastructure

    Fourteen months ago, before the Oracle purchase of Sun, I wrote about the specific danger of language infrastructure developed by a single for-profit patent-holding entity (when such infrastructure is less than 20 years old). In that blog post, I wrote:

    [Some] might argue that with all those patents consolidated [in a single company], patent trolls will have a tough time acquiring patents and attacking FaiF implementations. However, while this can sometimes be temporarily true, one cannot rely on this safety. Java, for example, is in a precarious situation now. Oracle is not a friend to Free Software, and soon will hold all Sun's Java patents — a looming threat to FaiF Java implementations … [A]n Oracle attack on FaiF Java is a possibility.

    I'm sorry that I was right about this, but we should now finally learn the lesson: languages like Java and C# are dangerous. Single companies developed them, and there are live, unexpired patents that can easily be used in a group to attack FaiF implementations. Of course, that doesn't mean other language infrastructures are completely safe from patents, but I believe there is greater relative risk in a system with patent consolidation at a single company.

    It also bears repeating the point I made on Linux Outlaws last July: this doesn't mean the Free Software community shouldn't have FaiF implementations of all languages. In fact, we absolutely should, because we do want developers who are familiar with those languages to bring their software over to GNU/Linux and other Free Software systems.

    However, this lawsuit proves that choosing some languages for newly written Free Software is dangerous and should be avoided, especially when there are safer choices like C, C++, Python, and Perl[0]. (See my blog post from last year for more on this subject.)

    Never Let Your Company File for Patents on Your Work

    James Gosling is usually pretty cryptic in his non-technical writing, but reading carefully, it seems to me that Gosling regrets that Oracle now holds his patents on Java. I know developers get nice bonuses if they let their company apply for patents on their work. I also know there's pressure in most large companies to get more patents. We, as developers, must simply refuse this. We invent this stuff, not the suits and the lawyers who want to exploit our work for larger and larger profits. As a community of developers and computer scientists, we must simply refuse to ever let someone patent our work. In a phrase: just say no.

    Even if you like your company today, you never know who will own those software patents later. I'm sure James Gosling originally never considered the idea that a company as revolting as Oracle would have control of everything he's invented for the last two decades. But they do, and there's nothing Gosling can do about what's done with his work and “inventions”. Learn from this example; don't let your company patent your work. Instead, publish online to establish prior art as quickly as possible.

    Google Is Not Merely a Pure Free Software Distributor

    Google has worked hard to cast themselves as innocent, Free-Software-producing victims. That's good PR because it's true, but it's also not telling the whole truth. Google worked hard to make sure Android was completely Apache-2.0 (or even more permissively) licensed (except for Linux, of course). There was already plenty of Java code available under the GPL that Google could have used. Sadly, Google was so allergic to GPL for Android/Linux that they even avoided LGPL'd components like uClibc and glibc (in favor of their own permissively-licensed C library based on a BSD version).

    Google's reason for permissive-only licensing for “everything but the kernel” was likely a classic “adoption is more important than software freedom” scenario. Google wants Android/Linux in as many phones as possible, and wants to eliminate any “barrier” to such adoption, even if such a “barrier” would defend software freedom.

    This new lawsuit would be much more interesting if Google had chosen GPL and/or LGPL for Android. In fact, if I fantasize about being empowered to design a binding, non-financial settlement to the lawsuit, the first item on my list would be a relicense of all future Android/Linux systems under GPL and/or LGPL. (Basically, Google would license only enough under LGPL to allow proprietary applications, and license all the rest as GPL, thus yielding the same licensing consequences as GNU/Linux and GNOME). Then, I'd have Oracle explicitly license all its patents under GPL and/or LGPL compatible licenses that would permit Android/Linux to continue unencumbered, but under copyleft. (BTW, Mark Wielaard has a blog post that discussed more about the issue of GPL'd/LGPL'd Java implementations and how they relate to this lawsuit.)

    I realize that's never going to happen, but it's an interesting thought experiment. I am of course opposed to software patents, and I certainly oppose companies like Oracle that produce almost all proprietary software. However, I can at least understand the logic of Oracle not wanting its software patents exercised in proprietary software. I think a trade-off, whereby all software patents are licensed freely and royalty-free only for use in copylefted software, is a reasonable compromise. OTOH, knowing Oracle, they could easily have plans to attack copyleft implementations too. Thus, we must assume they won't accept this reasonable compromise of “royalty-free licensing for copyleft only”. That brings me to my next point of FaiF hackers' concern about this lawsuit.

    Never Trust a Mere Patent Promise; Demand Real Patent Licenses

    I wrote after Bilski that patent promises just aren't enough, and this lawsuit is an example of why. I presume that Oracle's lawyers have looked carefully at the various promises and assurances that Sun made about its Java patents and have concluded Oracle has good arguments for why those promises don't apply to Android. I have no idea what those arguments are, but rarely do lawyers file a lawsuit without very good arguments already prepared. I hope Oracle's lawyers' arguments are wrong and they lose. But, the fact that Oracle even has a credible argument that Android/Linux doesn't already have a patent license shows again that patent promises are just not enough.

    Miguel de Icaza used this opportunity to point out how the Microsoft C# promises are “better” by comparison, in his opinion. But, Brett Smith at FSF already found huge holes in those Microsoft promises that haven't been fixed. In fact, any company making these promises always tries to hide as much nasty stuff as it can, to convince the users that they are safe from patent aggression when they really aren't. That's why the Free Software community must demand simple, clear, and permanent royalty-free patent licenses for all patents any company might hold. We should accept nothing less. As mentioned above, those licenses could perhaps require that a certain Free Software copyright license, such as GPLv3-or-later, be used for any software that gets the advantage of the license. (i.e., I can certainly understand if companies don't want to accidentally grant such patent licenses to their proprietary software competitors).

    Indeed, it's particularly important that the licenses cover all patents, including any that might be exercised in future improvements to the software. This lawsuit has clearly shown that even if patent pools exist for some subsets of patents for some subsets of Free Software, patent holders will either use other patents for aggression, or they'll assert patents in the patent pools against Free Software that's not part of the pool. In essence, we must assume that any for-profit company will become a patent troll eventually (they always do), and therefore any cross-licensing pools that don't include every patent possible for any possible Free Software will always be inadequate. So, the answer is simple: trust no software-patent-holding company unless they give an explicit GPLv3-compatible license for all their patents.

    We Must End Software Patents

    The failure of the Bilski case to end software patents in the USA means much work lies ahead. The End Software Patents Wiki has some good stuff about this case as well as lots of other information related to software patents. There are now heavily funded for-profit corporate efforts that seek to convince the Free Software community that patent reform is enough. But, it's not! For example, if you see presenters at FLOSS conferences claiming to have solutions to patent problems, ask them if their organization opposes all software patents, and ask them if their funders license all their patents freely for GPLv3-or-later software implementations. If you hear the wrong answers, then their motives and mission are suspect.

    Finally, I'd like to note that, in some sense, these patent battles help Free Software, because it may actually teach companies that the expense of having software patents is not worth the risk of patent lawsuits. It's possible we've reached a moment in history where it'd be better if the Software Patent Cold War becomes a full Software Patent Nuclear War. Software freedom can survive that “nuclear winter”. I sometimes think that in the Free Software community, we may find ourselves left with just two choices: fifty more years of Patent Cold War (with lots of skirmishes like this one), or ten years of full-on patent war (after which companies would beg Congress to end software patents). Both outcomes are horrible until they're resolved, but the latter would reach resolution quicker. I often wonder which is better in the long term for software freedom.

    But, no matter what happens next, the necessary position is: all software patents are bad for software freedom. Any entity that supports anything short of full abolition of software patents is working against software freedom.

    [0] I originally had PHP listed here, but jwildeboer argued that Zend Technologies, Ltd. might be a problem for PHP in the same way Oracle is for Java and Microsoft for C#. It's true that Zend is a software patent holder and was involved in the development of later PHP versions. I don't think the single-company-controlled software patent risks with PHP are akin to those of Java and C#, since Zend Technologies isn't the only entity involved in PHP's development, but certainly the other languages listed are likely preferable to PHP.

    Posted on Monday 16 August 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-08-13: GNOME Copyright Assignment Policy

    Vincent Untz announced and blogged today about the GNOME Copyright Assignment Policy and a longer guidelines document about the GNOME policy. I want to thank both Vincent and Michael Meeks for their work with me on this policy.

    As I noted in my blog last week, GUADEC really reminded me how great the GNOME community is. Therefore, it's with great pride that I was able to assist on this important piece of policy for the GNOME community.

    There are a lot of forces in the corporate side of Free Software right now that are aggressively trying to convince copylefted projects to begin assigning copyright of their code (or otherwise agree to CLAs) to corporations without any promises that the code will remain Free Software. We must resist this pressure: copyleft, when used correctly, is the force that keeps equality in the community, as I've written about before.

    I thank the GNOME Board of Directors for entrusting us to write the policy, and am glad they have adopted it.

    Posted on Friday 13 August 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-08-10: May They Make Me Superfluous

    The Linux Foundation announced today their own FLOSS license compliance program, which included the launch of a few software tools under a modified BSD license. They also have offered some training courses for those who want to learn how to comply.

    If this Linux Foundation (LF) program is successful, I may get something I've wished for since the first enforcement I ever worked on back in late 1998: I'd like to never do GPL enforcement again. I admit I talk a lot about GPL enforcement. It's indeed been a major center of my work for twelve years, but I can't say I've ever really liked doing it.

    By contrast, I have been hoping for years that someone would eventually come along and “put me out of the enforcement business”. Someday, I dream of opening up the <> folder and having no new violation reports (BTW, those dreams usually become real-life nightmares, as I typically get two new violation reports each week). I also wish for the day that I don't have a backlogged queue of 200 or more GPL violations where no source nor offer for source has been provided. I hate that it takes so much time to resolve violations because of the sheer number that exist.

    I got into GPL enforcement so heavily, frankly, because so few others were doing it. To this day, there are basically three groups even bothering to enforce GPL on behalf of the community: Conservancy (with enforcement efforts led by me), FSF (with enforcement efforts led by Brett Smith), and gpl-violations.org (with enforcement efforts led by Harald Welte). Generally, GPL enforcement has been a relatively lonely world for a long time, mainly because it's boring, tedious and patience-trying work that only the most dedicated (masochistic?) want to spend their time doing.

    There are a dozen very important software-freedom-advancing activities that I'd rather spend my time doing. But as long as people don't respect the freedom of software users and ignore the important protections of copyleft, I have to continue doing GPL enforcement. Any effort like LF's is very welcome, provided that it reduces the number of violations.

    Of course, LF (as GPL educators) and Brett, Harald, and I (as GPL enforcers) will share the biggest obstacle: getting communication going with the actual violators. Fact is, people who know the LF exists or have heard of the GPL are likely to already be in compliance. When I find a new violation, it's nearly always someone who doesn't even know what's going on, and often doesn't even realize what their engineering team put into their firmware. If LF can reach these companies before they end up as a violation report emailed to me, I'll be as glad as can be. But it's a tall order.
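    To illustrate how easily the “we didn't know what was in our firmware” problem can be caught, here is a rough sketch (my own illustration, not a tool that LF, Conservancy, or anyone else actually ships) of scanning a firmware image for the telltale printable strings that GPL'd components such as BusyBox embed in their compiled binaries; the marker strings chosen are illustrative:

    ```python
    # Hypothetical sketch: look for strings that GPL'd components
    # typically leave in compiled binaries. Illustrative only.
    import re

    # Printable-ASCII runs of 6+ bytes, like the Unix `strings` utility finds.
    STRINGS_RE = re.compile(rb'[\x20-\x7e]{6,}')

    # Markers suggestive of GPL'd code (an illustrative, incomplete list).
    GPL_MARKERS = [b'BusyBox v', b'GNU General Public License']

    def scan_firmware(blob):
        """Return printable-string runs in the blob containing a GPL marker."""
        return [run for run in STRINGS_RE.findall(blob)
                if any(marker in run for marker in GPL_MARKERS)]

    # Demo on a fabricated blob; real use would read a firmware image file.
    sample = (b'\x00junk\x00BusyBox v1.16 multi-call binary\x00'
              b'\xffGNU General Public License\x00')
    for hit in scan_firmware(sample):
        print(hit.decode('ascii'))
    ```

    A scan like this is no substitute for a real compliance process, but it shows how little effort it takes a vendor to notice that a GPL'd component shipped in their product.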

    I do have a few minor criticisms of LF's program. First, I believe the directory of FLOSS Compliance Officers should be made publicly available. I think FLOSS Compliance Officers at companies should make themselves publicly known in the software freedom community so they can be contacted directly. As LF currently has it set up, you have to make a request of the LF to put you in touch with a company's compliance officer.

    Second, I admit I'd have liked to have been actively engaged in LF's process of forming this program. But, I presume that they wanted as much distance as possible from the world's most prolific GPL enforcer, and I can understand that. (I suppose there's a good cop/bad cop metaphor you could make here, but I don't like to think of myself as the GPL police.) I did offer to help LF on this back in April when they announced it at the Linux Collaboration Summit, but they haven't been in touch. Nevertheless, I'll hopefully meet with LF folks on Thursday at LinuxCon about their program. Also, I was invited a few months ago by Martin Michlmayr to join one subset of the project, the SPDX working group, and I've been giving it time whenever I can.

    But, as I said, those are only minor complaints. The program as a whole looks like it might do some good. I hope companies take advantage of it, and more importantly, I hope LF can reach out to the companies who don't know their name yet but have BusyBox/Linux embedded in their products.

    Please, LF, help free me from the grind of GPL enforcement work. I remain committed to enforcing GPL until there are no violations left, but if LF can actually bring about an end to GPL violations sooner rather than later, I'll be much obliged. In a year, if I have an empty queue of GPL violations, I'll call LF's program an unmitigated success and gladly move on to other urgent work to advance software freedom.

    Posted on Tuesday 10 August 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-08-09: “Have To” Is a Relative Phrase

    I often hear it. I have to use proprietary software, people say. But usually, that's a justification and an excuse. Saying have to implies that they've been compelled by some external force to do it.

    It raises the question: Who's doing the forcing? I don't deny there might be occasions with a certain amount of force. Imagine if you're unemployed, and you've spent months looking for a job. You finally get one, but it generally doesn't have anything to do with software. After working a few weeks, your boss says you have to use a Microsoft Windows computer. Your choices are: use the software or be fired and spend months again looking for a job. In that case, if you told me you have to use proprietary software, I'd easily agree.

    But, imagine people who just have something they want to do, completely unrelated to their job, that is made convenient with proprietary software. In that case, there is no have to. One doesn't have to do a side project. So, it's a choice. The right phrase is wanted to, not have to.

    Saying that you're forced to do something when you really aren't is a failure to take responsibility for your actions. I generally don't think users of proprietary software are primarily to blame for the challenges of software freedom — nearly all the blame lies with those who write, market, and distribute proprietary software. However, I think that software users should be clear about why they are using the software. It's quite rare for someone to be compelled under threat of economic (or other) harm to use proprietary software. Therefore, only rarely is it justifiable to say you have to use proprietary software. In most cases, saying so is just making an excuse.

    As for being forced to develop proprietary software, I think it's even rarer yet. Back in 1991 when I first read the GNU Manifesto, I was moved by RMS' words about the issue:

    “Won't programmers starve?”

    I could answer that nobody is forced to be a programmer. Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else.

    But that is the wrong answer because it accepts the questioner's implicit assumption: that without ownership of software, programmers cannot possibly be paid a cent. Supposedly it is all or nothing.

    Well, even if it is all or nothing, RMS was actually right about this: we can do something else. By the mid 1990s, these words had inspired me to make a lifelong plan to make sure I'd never have to write or support proprietary software again. Despite being trained primarily as a computer scientist, I've spent much time building contingency plans to make sure I wouldn't be left with proprietary software support or development as my only marketable skill.

    During the 1990s, it wasn't clear that software freedom would have any success at all. It was a fringe activity; Cygnus was roughly the only for-profit company able to employ people to write Free Software. As such, I of course started learning the GCC codebase, figuring that I'd maybe someday get a job at Cygnus. I also started training as an American Sign Language translator, so I'd have a fallback career if I didn't get a job at Cygnus. Later, I learned how to play poker really well, figuring that in a worst case, I could end up as a professional poker player permanently.

    As it turned out, I've never had to rely fully on these fallback plans, primarily because I was hired by the FSF in 1999. For the last eleven years, I have been able to ensure that I've never had a job that required that I use, support, or write proprietary software and I've worked only on activities that directly advanced software freedom. I admit I was often afraid that someday I might be unable to find a job, and I'd have to support, use or write proprietary software again. Yet, despite that fear, since 1997, I've never even been close to that.

    So, honestly, I just don't believe those who say they have to use proprietary software. Almost always, they chose to use it, because it's more convenient than the other things they'd have to do to avoid it. Or, perhaps, they'd rather write or use proprietary software than write or use no software at all, even when avoiding software entirely was a viable option.

    In summary, I want to be clear that I don't judge people who use proprietary software. I realize not everyone wants to live their life as I do — with cascading fallback plans to avoid using, writing or supporting proprietary software. I nevertheless think it's disingenuous to say you have to use, support or develop proprietary software. It's a choice, and every year that goes by, the choice gets easier, so the statement sounds more like an excuse all the time.

    Posted on Monday 09 August 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-08-05: GUADEC 2010: Rate Conferences by Inspiration Value

    Conferences are often ephemeral. I've been going to FLOSS conferences since before there were conferences specifically for the topic. In the 1990s, I'd started attending various USENIX conferences. Many of my career successes can be traced back to attending those conferences and meeting key leaders in the FLOSS world. While I know this is true generally, I can't really recall, without reviewing notes from specific conferences, what happened at them, and how specifically it helped me personally or FLOSS in general. I know they're important to me and to software freedom, but it's tough to connect the dots perfectly without looking in detail at what happened when.

    Indeed, for most of us, after decades, conferences start to run together. At GUADEC this year, I had at least two conversations of the nature: What city was that? What conference was that? Wait, what year was that?. And that was just discussions about past GUADECs specifically, let alone other events!

    For my part, after checking my records, I discovered that I hadn't been to a GUADEC since 2003. I've served as FSF's representative on the GNOME Advisory Board straight through from 2001 until today, but nevertheless I hadn't been able to attend GUADECs from 2004-2009. Thus, the 2010 GUADEC was somewhat of a reintroduction for me to the in-person GNOME community.

    With fresh eyes, what I saw had great impact on me. GNOME seems to be a vibrant, healthy community, with many contributors and incredible diversity in both for-profit and volunteer contributions. GNOME's growth and project diversity has greatly exceeded what I would have expected to see between 2004 and 2010.

    It's not often I go to a conference and am jealous that I can't be more engaged as a developer. I readily admit that I haven't coded regularly in more than a decade (and I often long to do it again). But, I usually talk myself out of it when I remember the difficulty of getting involved and shepherding work upstream. It's a non-trivial job, and some don't even bother. The challenges are usually enough to keep the enticement at bay.

    Yet, I left GUADEC 2010 and couldn't see a downside in getting involved. I found myself on the flight back wishing I could do more, thinking through the projects I saw and wondering how I might be a coder again. There must be some time on the weekends somewhere, I thought, and while I'm not a GUI programmer, there's plenty of system stuff in GNOME like dbus and systemd; surely I can contribute there.

    Fact is, I've got too many other FLOSS-world responsibilities and I must admit I probably won't contribute code, despite wanting to. What's amazing, though, is that everything about GUADEC made me want to get more involved and there appeared no downside in doing so. There's something special about a conference (and a community) that can inspire that feeling in a hardened, decade-long conference attendee. I interact with a lot of FLOSS communities, and GNOME is probably the most welcoming of all.

    The rest of this post is a random bullet list of cool things that happened at GUADEC that I witnessed/heard/thought about:

    • There was a lot of debate and concern about the change in the GNOME 3 release schedule. I was impressed at the community unity on this topic when I heard a developer say in the hall: The change in GNOME 3 schedule is bad for me, but it's clearly the right thing for GNOME, so I support it. That's representative of the “all for one” and selfless attitude you'll find in the GNOME community.
    • Dave Neary presented a very interesting study on GNOME code contributions, which he was convinced to release under CC-By-SA. The study has caused some rancor in the community about who does or does not contribute to GNOME upstream, but generally speaking, I'm glad the data is out there, and I'm glad Dave's released it under a license that allows people to build on the work and reproduce and/or verify the results. (Dave's also assured me he'll release the tools and config files and all other materials under FaiF licenses as well; I'll put a link here when he has one.) Thing is, the most important and wonderful datum from Dave's study is that a plurality of GNOME contribution comes from volunteers: a full 23%! I think every FLOSS project needs a plurality of volunteer contribution to truly be healthy, and it seems GNOME has it.
    • My talk on GPLv3 was reasonably well received, notwithstanding some friendly kibitzing from Michael Meeks. There had been push back in previous discussions in the GNOME community about GPLv3. It seems now, however, that developers are interested in the license. It's not my goal to force anyone to switch, but I hope that my talk and my participation in this recent LGPLv3 thread on desktop-list might help to encourage a slow-but-sure migration to GPLv3-or-later (for applications) and (GPLv2|LGPLv3-or-later) (for platform libraries) in GNOME. If folks have questions about the idea, I'm always happy to discuss them.
    • I enjoyed rooming with Brad Taylor. We did wonder, though, if the GNOME Travel Committee assigned us rooms by similar first names. (In fact, I was so focused on the fact that we shared the same first name, I previously had typed Brad's last name wrong here!) I liked hearing about his Tomboy online project, Snowy. I'm obviously delighted to see adoption of AGPLv3, the license I helped create. I've promised Brad that I'll try to see if I can convince the org-mode community to use Snowy for its online storage as well.
    • Owen Taylor demoed and spoke about GNOME Shell 3.0. I don't use GUIs much myself, but I can see how GUI-loving users will really enjoy this excellent work.
    • I met Lennart Poettering and discussed with him in detail the systemd project. While I can see how this could be construed as a Canonical/Red Hat fight over the future of what's used for system startup, I still was impressed with Lennart's approach technically, and find it much healthier that his community isn't requiring copyright assignment.
    • Emmanuele Bassi's talk on Clutter was inspiring, as he delivered a heartfelt slide indicating that he'd overcome the copyright assignment requirements and assignment is no longer required by Intel for Clutter upstream contributions. I like to believe that Vincent Untz's, Michael Meeks' and my work on the (yet to be ratified) GNOME Copyright Assignment Policy was a help to Emmanuele's efforts in this regard. However, it sounds to me like the outcome was primarily due to a lot of personal effort on Emmanuele's part internally to get Intel to DTRT. I thank him for this effort and congratulate him on that success.
    • It was great to finally meet Fabian Scherschel in person. He kindly brought me some gifts from Germany and I brought him some gifts from the USA (we prearranged it; I guess that's the “outlaw” version of gifts). Fab also got some good interviews for the Linux Outlaws podcast that he does with Dan Lynch. It seems that podcast has been heavily linked to in the GNOME community, which is really good for Dan and Fab and for GNOME, I think.

    That's about all the random thoughts and observations I have from GUADEC. The conference was excellent, and I think I simply must re-add it to my “must attend each year” list.

    Finally, I want to thank the GNOME Foundation for sponsoring my travel costs. It allowed me to take some vacation time from my day job to attend and participate in GUADEC.

    Posted on Thursday 05 August 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-08-03: More GPL Enforcement Progress

    LWN is reporting a GPL enforcement story that I learned about last week while at GUADEC (excellent conference, BTW, blog post on that later this week). I wasn't sure if it was really of interest to everyone, but since it's hit the press, I figured I'd write a brief post to mention it.

    As many probably know, I'm president of the Software Freedom Conservancy, which is the non-profit organizational home of the BusyBox project. As part of my role at Conservancy, I help BusyBox in its GPL enforcement efforts. Specifically and currently, the SFLC is representing Conservancy in litigation against a number of defendants who have violated the GPL and were initially unresponsive to Conservancy's attempts to bring them into compliance with the terms of the license.

    A few months ago, one of those defendants, Westinghouse Digital Electronics, LLC, stopped responding to issues regarding the lawsuit. On Conservancy's behalf, SFLC asked the judge to issue a default judgment against them. A “default” means what it looks like: Conservancy asked to “win by default” since Westinghouse stopped showing up. And, last week, Conservancy was granted a default judgment against Westinghouse, which included an injunction to stop their GPL-non-compliant distributions of BusyBox.

    “Injunctive Relief”, as the lawyers call it, is a really important thing for GPL enforcement. Obviously our primary goal is full compliance with the GPL, which means giving the complete and corresponding source code (C&CS, as I tend to abbreviate it) to all those who received binary distributions of the software. Unfortunately, in some cases (for example, when a company simply won't cooperate in the process despite many efforts to convince them to do so), the only option is to stop further distribution of the violating software. As many parts of the GPL itself point out, it's better to not have software distributed at all, if it's only being distributed as (de facto) proprietary software.

    I'm really glad that a judge has agreed that the GPL is an important enough license to warrant an injunction on out-of-compliance distribution. This is a major step forward for GPL enforcement in the USA. (Please note that Harald Welte had similar successes in Germany in the past, and deserves credit and kudos for being the first in the world to get this done. This success follows in his footsteps.)

    Posted on Tuesday 03 August 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2010-07-15: At Least Motorola Admits It

    I've written before about the software freedom issues inherent in Android/Linux. Summarized shortly: the software freedom community is fortunate that Google released so much code under Free Software licenses, but since most of the code in the system is Apache-2.0-licensed, we're going to see a lot of proprietarized, non-user-upgradable versions. In fact, there's no Android/Linux system that's fully Free Software yet. (That's why Aaron Williamson and I try to keep the Replicant project going. We've focused on the HTC Dream and the Nexus One, since they are the mobile devices closest to working with only Free Software installed, and because they allow the users to put their own firmware on the device.)

    I was therefore intrigued to discover last night (via mtrausch) a February blog post by Lori Fraleigh of Motorola, wherein Fraleigh clarifies Motorola's opposition to software freedom for its Android/Linux users:

    We [Motorola] understand there is a community of developers interested in … Android system development … For these developers, we highly recommend obtaining either a Google ADP1 developer phone or a Nexus One … At this time, Motorola Android-based handsets are intended for use by consumers.

    I appreciate the fact that Fraleigh and Motorola are honest in their disdain for software developers. Unlike Apple — who tries to hide how developer-unfriendly its mobile platform is — Motorola readily admits that they seek to leave developers as helpless as possible, refusing to share the necessary tools that developers need to upgrade devices and to improve themselves, their community, and their software. Companies like Motorola and Apple both seek to squelch the healthy hacker tendency to make technology better for everyone. Now that I've seen Fraleigh's old blog post, I can at least give Motorola credit for full honesty about these motives.

    I do, however, find the implication of Fraleigh's words revolting. People who buy the devices, in Motorola's view, don't deserve the right to improve their technology. By contrast, I believe that software freedom should be universal and that no one need be a “mere consumer” of technology. I believe that every technology user is a potential developer who might have something to contribute but obviously cannot if that user isn't given the tools to do so. Sadly, it seems, Motorola believes the general public has nothing useful to contribute, so the public shouldn't even be given the chance.

    But this attitude is typical of proprietary software companies, so there are actually no revelations on that point. Of more interest is how Motorola was able to do this, given that Android/Linux (at least most of it) is Free Software.

    Motorola's ability to take these actions is a consequence of a few licensing issues. First, most of the Android system is under the Apache-2.0 license (or, in some cases, an even more permissive license). These licenses allow Motorola to make proprietary versions of what Google released and to sell them without source code or any ability for users to install modified versions. That license decision is lamentable (but expected, given Google's goals for Android).

    The even more lamentable licensing issue here is regarding Linux's license, the GPLv2. Specifically, Fraleigh's post claims:

    The use of open source software, such as the Linux kernel … in a consumer device does not require the handset running such software to be open for re-flashing. We comply with the licenses, including GPLv2.

    I should note that, other than Fraleigh's assertion quoted above, I have no knowledge one way or another if Motorola is compliant with GPLv2 on its Android/Linux phones. I don't own one, have no plans to buy one, and therefore I'm not in receipt of an offer for source regarding the devices. I've also received no reports from anyone regarding possible non-compliance. In fact, I'd love to confirm their compliance: please get in touch if you have a Motorola Android/Linux phone and attempted to install a newly compiled executable of Linux onto your phone.

    I'm specifically interested in the installation issue because GPLv2 requires that any binary distribution of Linux (such as one on telephone hardware) include both the source code itself and the scripts to control compilation and installation of the executable. So, if Motorola wrote any helper programs or other software that installs Linux onto the phones, then such software, under GPLv2, is a required part of the complete and corresponding source code of Linux and must be distributed to each buyer of a Motorola Android/Linux phone.

    If you're surprised by that last paragraph, you're probably not alone. I find that many are confused regarding this GPLv2 nuance. I believe the confusion stems from discussions during the GPLv3 process about this specific requirement. GPLv3 does indeed expand the requirement for the scripts to control compilation and installation of the executable into the concept of Installation Information. Furthermore, GPLv3's Installation Information is much more expansive than merely requiring helper software programs and the like. GPLv3's Installation Information includes any material, such as an authorization key, that is necessary for installation of a modified version onto the device.

    However, the fact that GPLv3 expanded the installation-information requirements does not lessen GPLv2's own requirement. In fact, in my reading of GPLv2 in comparison to GPLv3, the only effective difference between the two on this point relates to cryptographic device lock-down. I do admit that under GPLv2, even if you give all the required installation scripts, you can still use cryptography to prevent those scripts from functioning without an authorization key. Some vendors do this, and that's precisely why GPLv3 is written the way that it is: we'd observed such lock-down occurring in the field, and identified that behavior as a bug in GPLv2 that is now closed in GPLv3.

    However, amid all that hype about GPLv3's new Installation Information definition, many simply forgot that GPLv2 isn't silent on the issue. In other words, GPLv3's verbosity on the subject led people to minimize GPLv2's important existing requirements regarding installation information.

    As regular readers of this blog know, I've spent much of my time for the last 12 years doing GPL enforcement. Quite often, I must remind violators that GPLv2 does indeed require the scripts to control compilation and installation of the executable, and that candidate source code releases missing the scripts remain in violation of GPLv2. I sincerely hope that Android/Linux redistributors haven't forgotten this.
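    To make that point concrete, here is a minimal sketch of the kind of sanity check one could run on a candidate source release. It is a hypothetical illustration, not any real compliance tool; the file names it looks for (a Makefile and an install.sh) are assumptions for illustration only, since the scripts a vendor actually used to control compilation and installation may have any name at all.

```shell
#!/bin/sh
# Hypothetical sanity check on a candidate "complete and corresponding
# source" (C&CS) release.  GPLv2 requires not just the source code but
# also the scripts used to control compilation and installation of the
# executable, so a release missing those scripts is still out of
# compliance.  The expected file names below are illustrative only.

check_ccs_release() {
    release_dir="$1"
    missing=""
    for f in Makefile install.sh; do
        [ -e "$release_dir/$f" ] || missing="$missing $f"
    done
    if [ -n "$missing" ]; then
        echo "candidate release incomplete: missing$missing"
        return 1
    fi
    echo "compilation and installation scripts present"
    return 0
}
```

    To use it, call `check_ccs_release /path/to/candidate-release`. Of course, a real compliance review is a human process; the only point here is that shipping "source code alone" does not satisfy GPLv2.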

    I have one final and important point to make regarding Motorola's February statement: I've often mentioned that the mobile industry's opposition to GPLv3 and to user-upgradable devices is entirely for its own reasons, and has nothing to do with regulators or other outside entities preventing the release of such software. In their blog post, Motorola tells us quite clearly that the community of developers interested in … experimenting with Android system development and re-flashing phones … [should obtain] either a Google ADP1 developer phone or a Nexus One, both of which are intended for these purposes. In other words, Motorola tacitly admits that it's completely legal and reasonable for the community to obtain such telephones, and that, in fact, Google sells such devices. Motorola was not required to put lock-down restrictions in place; rather, it made a choice to prohibit users in this way. On this point, Google chose to treat its users with respect, allowing them to install modified versions. Motorola, by contrast, chose to make Android/Linux as close to Apple's iPhone as it could get away with legally.

    So, the next time a mobile company tries to tell you that it just can't abide by GPLv3 because some third party (the FCC is the frequent scapegoat) prohibits it, you should call them on their FUD. Point out that Google sells phones on the open market that provide all the Installation Information that GPLv3 might require. (In other words, even if Linux were GPLv3'd, Android/Linux on the Nexus One and HTC Dream would be a GPLv3-compliant distribution.) Meanwhile, at least one such company, Motorola, has admitted its solitary reason for avoiding GPLv3: the company just doesn't believe users deserve the right to install improved versions of their software. At least they admit their contempt for their customers.

    Update (same day): jwildeboer pointed me to a few posts in the custom ROM and jailbreaking communities about their concerns regarding Motorola's new offering, the Droid-X. Some commenters there point out that eventually, most phones get jailbroken or otherwise allow user control. However, the key point of the CrunchGear User Manifesto is a clear and good one: no company or person has the right to tell you that you may not do what you like with your own property. This point is akin, and perhaps essential, to software freedom. It doesn't really matter whether you can figure out how to hack a device; what's important is that you not give your money to a company that prohibits such hacking. For goodness sake, people, why don't we all use ADP1s and Nexus Ones and be done with this?

    Updated (2010-07-17): It appears that cryptographic lock-down on the Droid-X is confirmed (thanks to rao for the link). I hope everyone will boycott all Motorola devices because of this, especially given that there are Android/Linux devices on the market that aren't locked down in this way.

    BTW, in Motorola's answer to Engadget on this, we see they are again subtly sending FUD that the lock-down is somehow legally required:

    Motorola's primary focus is the security of our end users and protection of their data, while also meeting carrier, partner and legal requirements.
    I agree the carriers and partners probably want such lock-down, but I'd like to see their evidence that there is a legal restriction that requires it. They present none.

    Meanwhile, they also state that such cryptographic lock-down is the only way they know how to secure their devices:

    Checking for a valid software configuration is a common practice within the industry to protect the user against potential malicious software threats.

    Pity that Motorola engineers aren't as clueful as the Google and HTC engineers who designed the ADP1 and Nexus One.

    Posted on Thursday 15 July 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-07-07: Proprietary Software Licensing Produces No New Value In Society

    I sought out the quote below when Chris Dodd paraphrased it on Meet The Press on 25 April 2010. (I've been, BTW, slowly but surely working on this blog post since that date.) Dodd was quoting Frank Rich, who wrote the following, referring to the USA economic system (and its recent collapse):

    As many have said — though not many politicians in either party — something is fundamentally amiss in a financial culture that thrives on “products” that create nothing and produce nothing except new ways to make bigger bets and stack the deck in favor of the house. “At least in an actual casino, the damage is contained to gamblers,” wrote the financial journalist Roger Lowenstein in The Times Magazine last month. This catastrophe cost the economy eight million jobs.

    I was drawn to this quote for a few reasons. First, as a poker player, I've spent some time thinking about how “empty” the gambling industry is. Nothing is produced; no value for humans is created; it's just an exchange of money for things that don't actually exist. I've been considering that issue regularly since around 2001 (when I started playing poker seriously). I ultimately came to a conclusion not too different from Frank Rich's point: since there is a certain “entertainment value”, and since the damage is contained to those who choose to enter the casino, I'm not categorically against poker nor gambling in general, nor do I think they are immoral. However, I also don't believe gambling has any particularly important value in society, either. In other words, I don't think people have an inalienable right to gamble, but I also don't think there is any moral reason to prohibit casinos.

    Meanwhile, I've also spent some time applying this idea of creating nothing and producing nothing to the proprietary software industry. Proprietary licenses, in many ways, are actually not all that different from these valueless financial transactions. Initially, there's no problem: someone writes software and is paid for it; that's the way it should be. Creation of new software is an activity that should absolutely be funded: it creates something new and valuable for others. However, proprietary licenses are designed specifically to allow a single act of programming to generate new revenue over and over again. In this aspect, proprietary licensing is akin to selling financial derivatives: the actual valuable transaction is buried well below the non-existent financial construction above it.

    I admit that I'm not a student of economics. In fact, I rarely think of software in terms of economics, because, generally, I don't want economic decisions to drive my morality nor that of our society at large. As such, I don't approach this question with an academic economic slant, but rather, from personal economic experience. Specifically, I learned a simple concept about work when I was young: workers in our society get paid only for the hours that they work. To get paid, you have to do something new. You just can't sit around and have money magically appear in your bank account for hours you didn't work.

    I always approached software with this philosophy. I've often been paid for programming, but I've been paid directly for the hours I spent programming. I never even considered it reasonable to be paid again for programming I did in the past. How is that fair, just, or quite frankly, even necessary? If I get a job building a house, I can't get paid every day someone uses that house. Indeed, even if I built the house, I shouldn't get a royalty paid every time the house is resold to a new owner0. Why should software work any differently? Indeed, there's even an argument that software, since it's so much more trivial to copy than a house, should be available gratis to everyone once it's written the first time.

    I recently heard (for the first time) an old story about a well-known Open Source company (which no longer exists, in case you're wondering). As the company grew larger, the company's owners were annoyed that the company could only bill the clients for the hours they worked. The business was going well, and they even had more work than they could handle because of the unique expertise of their developers. The billable rates covered the cost of the developers' salaries plus a reasonable profit margin. Yet, the company executives wanted more; they wanted to make new money even when everyone was on vacation. In essence, having all the new, well-paid programming work in the world wasn't enough; they wanted the kinds of obscene profits that can only be made from proprietary licensing. Having learned this story, I'm pretty glad the company ceased to exist before they could implement their make money while everyone's on the beach plan. Indeed, the first order of business in implementing the company's new plan was, not surprisingly, developing some new from-scratch code not covered by GPL that could be proprietarized. I'm glad they never had time to execute on that plan.

    I'll just never be fully comfortable with the idea that workers should keep getting paid for work they already did. Work is only valuable if it produces something new that didn't exist in the world before the work started, or solves a problem that had yet to be solved. Proprietary licensing and financial bets on market derivatives have something troubling in common: they can make a profit for someone without requiring that someone to do any new work. Any time a business moves away from actually producing something new of value for a real human being, I'll always question whether the business remains legitimate.

    I've thus far ignored one key point in the quote that began this post: “At least in an actual casino, the damage is contained to gamblers”. Thus, for this “valueless work” idea to apply to proprietary licensing, I had to consider (a) whether or not the problem is sufficiently contained, and (b) whether or not software is, like gambling, a mere entertainment activity.

    I've pointed out that I'm not opposed to the gambling industry, because the entertainment value exists and the damage is contained to people who want that particular entertainment. To avoid the stigma associated with gambling, I can also make a less politically charged example such as the local Chuck E. Cheese, a place I quite enjoyed as a child. One's parent or guardian goes to Chuck E. Cheese to pay for a child's entertainment, and there is some value in that. If someone took issue with Chuck E. Cheese's operation, it'd be easy to just ignore it and not take your children there, finding some other entertainment. So, the question is, does proprietary software work the same way, and is it therefore not too damaging?

    I think the excuse doesn't apply to proprietary software for two reasons. First, the damage is not sufficiently contained, particularly for widely used software. It is, for example, roughly impossible to get a job that doesn't require the employee to use some proprietary software. Imagine if we lived in a society where you weren't allowed to work for a living unless you agreed to gamble a certain part of your weekly salary on Blackjack. Of course, this situation is not fully analogous, but the fundamental principle applies: software is ubiquitous enough in industrialized society that it's roughly impossible to avoid encountering it in daily life. Therefore, the proprietary software situation is not adequately contained, and is difficult for individuals to avoid.

    Second, software is not merely a diversion. Our society has changed enough that people cannot work effectively in the society without at least sometimes using software. Therefore, the “entertainment” part of the containment theory does not properly apply1, either. If citizens are de-facto required to use something to live productively, it must have different rules and control structures around it than wholly optional diversions.

    Thus, this line of reasoning gives me yet another reason to oppose proprietary software: proprietary licensing is simply a valueless transaction. It creates a burden on society and gives no benefit, other than a financial one to those granted the monopoly over that particular software program. Unfortunately, there nevertheless remain many who want that level of control, because one fact cannot be denied: the profits are larger.

    For example, Mårten Mickos recently argued in favor of these sorts of large profits. He claims that to benefit massively from Open Source (i.e., to get really rich), business models like “Open Core” are necessary. Mårten's argument, and indeed most pro-Open-Core arguments, rely on the following fundamental assumption: for FLOSS to be legitimate, it must allow for the same level of profits as proprietary software. This assumption, in my view, is faulty. It's always true that you can make bigger profits by ignoring morality. Factories can easily make more money by completely ignoring environmental issues; strip mining is always very profitable, after all. However, as a society, we've decided that the environment is worth protecting, so we have rules that do limit profit maximization because a more important goal is served.

    Software freedom is another principle of this type. While you can make a profit with community-respecting FLOSS business models (such as service, support and freely licensed custom modifications on contract), it's admittedly a smaller profit than can be made with Open Core and proprietary licensing. But that greater profit potential doesn't legitimize such business models, just as it doesn't legitimize strip mining or gambling on financial derivatives.

    Update: Based on some feedback that I got, I felt it was important to make clear that I don't believe this argument alone can create a unified theory that shows why software freedom should be an inalienable right for all software users. This factor of lack of value that proprietary licensing brings to society is just another to consider in a more complete discussion about software freedom.

    Update: Glynn Moody wrote a blog post that quoted from this post extensively and made some interesting comments on it. There's some interesting discussion in the blog comments there on his site; perhaps because so many people hate that I only do blog comments on (which I do, BTW, because it's the only online forum I'm assured that I'll actually read and respond to.)

    0I realize that some argue that you can buy a house, then rent it to others, and evict them if they fail to pay. Some might argue further that owners of software should get this same rental power. The key difference, though, is that the house owner can't really make full use of the house when it's being rented. The owner's right to rent it to others, therefore, is centered around the idea that the owner loses some of their personal ability to use the house while the renters are present. This loss of use never happens with software.

    1You might be wondering, Ok, so if it's pure entertainment software, is it acceptable for it to be proprietary?. I have often said: if all published and deployed software in the world were guaranteed Free Software except for video games, I wouldn't work on the cause of software freedom anymore. Ultimately, I am not particularly concerned about the control structures in our culture that exist for pure entertainment. I suppose there's some line to be drawn between art/culture and pure entertainment/diversion, but considerations on differentiating control structures on that issue are beyond the scope of this blog post.

    Posted on Wednesday 07 July 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2010-06-30: Post-Bilski Steps for Anti-Software-Patent Advocates

    Lots of people are opining about the USA Supreme Court's ruling in the Bilski case. Yesterday, I participated in an oggcast with the folks at SFLC. In that oggcast, Dan Ravicher explained most of the legal details of Bilski; I could never cover them as well as he did, and I wouldn't even try.

    Anyway, as a non-lawyer, I'm pretty much only concerned with the forward-looking policy questions. However, to look back briefly at how our community responded to the Bilski situation over the last 18 months: it seems similar to what happened while the Eldred case was working its way to the Supreme Court. In the months preceding both Eldred and Bilski, there seemed to be a mass hypnosis that the Supreme Court would actually change copyright law (Eldred) or patent law (Bilski) to make things better for the freedom of computer users.

    In both cases, that didn't happen. There was admittedly less of that giddy optimism before Bilski than there was before Eldred, but the ultimate outcome for computer users is roughly the same in both cases: as with Eldred, we're left with the same policy situation we had before Bilski ever started making its way through the various courts. As near as I can tell from what I've learned, the entire “Bilski thing” appears to be a no-op. In short, as before, the Patent Office sometimes can and will deny applications that it determines are only abstract ideas, and the Supreme Court has now confirmed that the Patent Office can reject such an application if the Patent Office knows an abstract idea when it sees it. Nothing has changed regarding most patents that are granted every day, including those that read on software. Those of us who oppose software patents continue to believe that software algorithms are merely abstract ideas and pure mathematics, and shouldn't be patentable subject matter. The governmental powers still seem to disagree with us, or, at least, just won't comment on that question.

    Looking forward, my largest concern, from a policy perspective, is that the “patent reform” crowd, who claim to be the allies of the anti-software-patent folks, will use this decision to declare that the system works. Bilski's patent was ultimately denied, but on grounds that leave us no closer to abolishing software patents. Patent reformists will say: Well, invalid patents get denied, leaving space for the valid ones. Those valid ones, they will say, do and should include lots of patents that read on software. But only the really good ideas should be patented, they will insist.

    We must not yield to the patent reformists, particularly at a time like this. (BTW, be sure to read RMS' classic and still relevant essay, Patent Reform Is Not Enough, if you haven't already.)

    Since Bilski has given us no new tools for abolishing software patents, we must redouble efforts with the tools we already have to mitigate the threat patents pose to software freedom. Here are a few suggestions, which I think are all implementable by the average developer, to keep up the fight against software patents, or at least to mitigate their impact:

    • License your software using the AGPLv3, GPLv3, LGPLv3, or Apache-2.0. Among the copyleft licenses, AGPLv3 and GPLv3 offer the best patent protections; LGPLv3 offers the best among the weak copyleft licenses; Apache License 2.0 offers the best patent protections among the permissive licenses. These are the licenses we should gravitate toward, particularly since multiple companies with software patents are regularly attacking Free Software. At least when such companies contribute code to projects under these licenses, we know those particular codebases will be safe from that particular company's patents.
    • Demand real patent licenses from companies, not mere promises. Patent promises are not enough0. The Free Software community deserves to know it has real patent licenses from companies that hold patents. At the very least, we should demand unilateral patent licenses for all their patents perpetually for all possible copylefted code (i.e., companies should grant, ahead of time, the exact same license that the community would get if the company had contributed to a yet-to-exist GPLv3'd codebase)1. Note further that some companies that claim to be part of the FLOSS community haven't even given the (inadequate-but-better-than-nothing) patent promises. For example, BlackDuck holds a patent related to FLOSS, but despite saying they would consider at least a patent promise, have failed to make even that minimal effort.
    • Support organizations/efforts that work to oppose and end software patents. In particular, be sure that the efforts you support are not merely “patent reform” efforts hidden behind anti-software patent rhetoric. Here are a few initiatives that I've recently seen doing work regarding complete abolition of software patents. I suggest you support them (with your time or dollars):
    • Write your legislators. This never hurts. In the USA, it's unlikely we can convince Congress to change patent law, because there are just too many lobbying dollars from those big patent-holding companies (e.g., the same ones that wrote those nasty amicus briefs in Bilski). But writing your Senators and Congresspeople once a year to remind them of your opposition to patents that read on software simply can't hurt, and may theoretically help a tiny bit. Now would be a good time to do it, since you can mention how the Bilski decision convinced you there's a need for legislative abolition of software patents. Meanwhile, remember, it's even better if you show up at political debates during election season and ask the candidates to oppose software patents!
    • Explain to your colleagues why software patents should be abolished, particularly if you work in computing. Software patent abolition is actually a broad spectrum issue across the computing industry. Only big and powerful companies benefit from software patents. The little guy — even the little guy proprietary developer — is hurt by software patents. Even if you can't convince your colleagues who write proprietary software that they should switch to writing Free Software, you can instead convince them that software patents are bad for them personally and for their chances to succeed in software. Share the film, Patent Absurdity, with them and then discuss the issue with them after they've viewed it. Blog, tweet, dent, and the like about the issue regularly.
    • (added 2010-07-01 on tmarble's suggestion) Avoid products from pro-software-patent companies. This is tough to do, and it's why I didn't call for an all-out boycott. Most companies that make computers are pro-software-patent, so it's actually tough to buy a computer (or even components for one) without buying from a pro-software-patent company. However, avoiding the most aggressive patent aggressors is easy: avoiding Apple products is a good first step (there are plenty of other reasons to avoid Apple anyway). Microsoft would be next on the list, since they specifically use software patents to attack FLOSS projects. Those are likely the big two to avoid, but always remember that all large companies with proprietary software products actively enforce patents, even if they don't file lawsuits. In other words, go with the little guy if you can; it's more likely to be a patent-free zone.
    • If you have a good idea, publish it and make sure the great idea is well described in code comments and documentation, and that everything is well archived by date. I put this one last on my list, because it's more of a help for the software patent reformists than it is for the software patent abolitionists. Nevertheless, sometimes, patents will get in the way of Free Software, and it will be good if there is strong prior art showing that the idea was already thought of, implemented, and put out into the world before the patent was filed. But, fact is, the “valid” software patents with no prior art are a bigger threat to software freedom. The stronger the patent, the worse the threat, because it's more likely to be innovative, new technology that we want to implement in Free Software.

    I sat and thought of what else I could add to this list that individuals can do to help abolish software patents. I was sad that these six were the only things I could collect, but that's all the more reason to do these six things in earnest. The battle for software freedom for all users is not one we'll win in our lifetimes. It's also possible that abolition of software patents will take a generation as well. Those of us who seek this outcome must be prepared for patience and lifelong, diligent work so that the right outcome happens, eventually.

    0 Update: I was asked for a longer write up on software patent licenses as compared to mere “promises”. Unfortunately, I don't have one, so the best I was able to offer was the interview I did on Linux Outlaws, Episode 102, about Microsoft's patent promise. I've also added a TODO to write something up more completely on this particular issue.

    1 I am not leaving my permissively-license-preferring friends out of this issue without careful consideration. Specifically, I just don't think it's practical or even fair to ask companies to license their patents for all permissively-licensed code, since that would be the same as licensing to everyone, including their proprietary software competitors. An ahead-of-time perpetual license to practice the teachings of all the company's patents under AGPLv3 basically makes sure that code that's eternally Free Software will also eternally be patent-licensed from that company, even if the company never contributes to the AGPLv3'd codebase. Anyone trying to make proprietary code that infringed the patent wouldn't have benefit of the license; only Free Software users, distributors and modifiers would have the benefit. If a company supports copyleft generally, then there is no legitimate reason for the company to refuse such a broad license for copyleft distributions and deployments.

    Posted on Wednesday 30 June 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-06-23: New Ground on Terminology Debate?

    These days, I generally try to avoid the well-known terminology debates in our community. But, if you hang around this FLOSS world of ours long enough, you just can't avoid occasionally getting into them. I found myself in one this afternoon that spanned three identica threads. I had some new thoughts that I've shared today (and even previously) on my microblog. I thought it might be useful to write them up in one place rather than leave them scattered across a series of microblog statements.

    I gained my first new insight into the terminology issues when I had dinner with Larry Wall in early 2001 after my Master's thesis defense. It was the first time I talked with him about these terminology issues, and he said that it sounded like a good place to apply what he called the “golden rule of network protocols”: Always be conservative in what you emit and liberal in what you accept. I've recently noted again that's a good rule to follow regarding terminology.
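    As a playful illustration (my own, not Larry's), that golden rule translates directly from protocol code to terminology: accept many variant names as input, but emit only one consistent term. The variant list below is hypothetical, just to show the shape of the idea.

    ```python
    # Illustrative sketch of the "golden rule of network protocols"
    # (Postel's robustness principle) applied to terminology:
    # liberal in what we accept, conservative in what we emit.

    # Variant names we are willing to accept as input (liberal).
    ACCEPTED_VARIANTS = {
        "open source": "FLOSS",
        "free software": "FLOSS",
        "libre software": "FLOSS",
        "floss": "FLOSS",
        "foss": "FLOSS",
    }

    def normalize(term: str) -> str:
        """Map any accepted variant to the single term we emit."""
        key = term.strip().lower()
        if key in ACCEPTED_VARIANTS:
            # Emit one consistent term (conservative).
            return ACCEPTED_VARIANTS[key]
        raise ValueError(f"unrecognized term: {term!r}")

    print(normalize("  Open Source "))  # → FLOSS
    ```

    The same asymmetry shows up in real parsers and protocol stacks: tolerate sloppy input, but never produce it yourself.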

    More recently, I've realized that the FLOSS community suffers here, likely due to our high concentration of software developers and engineers. Precision in communication is a necessary component of the lives of developers, engineers, computer scientists, or anyone in a highly technical field. In our originating fields, lack of precise and well-understood terminology can cause bridges to collapse or the wrong software to get installed and crash mission-critical systems. Calling x by the name y sometimes causes mass confusion and failure. Indeed, earlier this week, I watched a PBS special, The Pluto Files, where Neil deGrasse Tyson discussed the intense debate about the planetary status of Pluto. I was actually somewhat relieved that a subtle point regarding categorical naming is just as contentious in another area outside my chosen field. Watching the “what constitutes a planet” debate showed me that FLOSS hackers are no different from most other scientists in this regard. We all take quite a bit of pride in our careful (sometimes pedantic) care in terminology and word choice; I know I do, anyway.

    However, on the advocacy side of software freedom (the part that isn't technical), our biggest confusion sometimes stems from an assumption that other people's word choice is as necessarily as precise as ours. Consider the phrase “open source”, for example. When I say “open source”, I am referring quite exactly to a business-focused, apolitical and (frankly) amoral0 interest in, adoption of, and contribution to FLOSS. Those who coined the term “open source” were right about at least one thing: it's a term that fits well with for-profit interests who might otherwise see software freedom as too political.

    However, many non-business users and developers that I talk to quite clearly express that they are into this stuff precisely because there are principles behind it: namely, that FLOSS seeks to make a better world by giving important rights to users and programmers. Often, they are using the phrase “open source” as they express this. I of course take the opportunity to say: it's because those principles are so important that I talk about software freedom. Yet, it's clear they already meant software freedom as a concept, and just had some sloppy word choice.

    Fact is, most of us are just plain sloppy with language. Precision isn't everyone's forte, and as a software freedom advocate (not a language usage advocate), I see my job as making sure people have the concepts right even if they use words that don't make much sense. There are times when the word choices really do confuse the concepts, and there are other times when they don't. Sometimes, it's tough to identify which of the two is occurring. I try to figure it out in each given situation, and if I'm in doubt, I just simplify to the golden rule of network protocols.

    Furthermore, I try to have faith in our community's intelligence. Regardless of how people get drawn into FLOSS: be it from the moral software freedom arguments or the technical-advantage-only open source ones, I don't think people stop listening immediately upon their arrival in our community. I know this even from my own adoption of software freedom: I came for the Free as in Price, but I stayed for the Free as in Freedom. It's only because I couldn't afford a SCO Unix license in 1992 that I installed GNU/Linux. But, I learned within just a year why the software freedom was what mattered most.

    Surely, others have a similar introduction to the community: either drawn in by zero-cost availability or the technical benefits first, but still very interested to learn about software freedom. My goal is to reach those who have arrived in the community. I therefore try to speak almost constantly about software freedom, why it's a moral issue, and why I work every day to help either reduce the amount of proprietary software, or increase the amount of Free Software in the world. My hope is that newer community members will hear my arguments, see my actions, and be convinced that a moral and ethical commitment to software freedom is the long lasting principle worth undertaking. In essence, I seek to lead by example as much as possible.

    Old arguments are a bit too comfortable. We already know how to have them on autopilot. I admit that I enjoy having an old argument with a new person: my extensive practice often yields an oratorical advantage. But, that crude drive is too much about winning the argument and not enough about delivering the message of software freedom. Occasionally, a terminology discussion is part of delivering that message, but my terminology-debate toolbox has “use with care” written on it.

    0 Note that here, too, I took extreme care with my word choice. I mean specifically amorality — merely an absence of any moral code in particular. I do not, by any stretch, mean immoral.

    Posted on Wednesday 23 June 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-06-11: Where Are The Bytes?

    A few years ago, I was considering starting a Free Software project. I never did start that one, but I learned something valuable in the process. When I thought about starting this project, I did what I usually do: ask someone who knows more about the topic than I do. So I phoned my friend Loïc Dachary, who has started many Free Software projects, and asked him for advice.

    Before I could even describe the idea, Loïc said: you don't have a URL? I was taken aback; I said: but I haven't started yet. He said: of course you have, you're talking to me about it, so you've started already. The most important thing you can tell me, he said, is Where are the bytes?

    Loïc explained further: Most projects don't succeed. The hardest part about a software freedom project is carrying it far enough so it can survive even if its founders quit. Therefore, under Loïc's theory, the most important task at the project's start is to generate those bytes, in hopes those bytes find their way to a group of developers who will help keep the project alive.

    But, what does he mean by “bytes”? He means, quite simply, that you have to core dump your thinking, your code, your plans, your ideas, just about everything on a public URL that everyone can take a look at. Push bytes. Push them out every time you generate a few. It's the only chance your software freedom project has.

    The first goal of a software freedom project is to gain developers. No project can have long-term success without a diverse developer base. The problem is, the initial development work and project planning too often ends up trapped in the head of a few developers. It's human nature: How can I spend my time telling everyone about what I'm doing? If I do that, when will I actually do anything? Successful software freedom project leaders resist this human urge and do the seemingly counterintuitive thing: they dump their bytes on the public, even if it slows them down a bit.

    This process is even more essential in the network age. If someone wants to find a program that does a job, the first tool is a search engine: to find out if someone else has done it yet. Your project's future depends completely on whether such searches help developers find your bytes.

    In early 2001, I asked Larry Wall, of all the projects he'd worked on, which was the hardest. His answer was quick: when I was developing the first version of perl5, Larry said, I felt like I had to code completely alone and just make it work by myself. Of course, Larry's a very talented guy who can make that happen: generate something by himself that everyone wanted to use. While I haven't asked him what he'd do in today's world if he were charged with a similar task, I can guess — especially given how public the Perl6 process has been — that he'd instead use the new network tools, such as DVCS, to push his bytes early and often and seek to get more developers involved early.0

    Admittedly, most developers' first urge is to hide everything. We'll release it when it's ready, is often heard, or — even worse — Our core team works so well together; it'll just slow us down to make things public now. Truth is, this is a dangerous mixture of fear and narcissism — the very same drives that lead proprietary software developers to keep things proprietary.

    Software freedom developers have the opportunity to get past a simple reality of software development: all code sucks, and usually isn't complete. Yet, it's still essential that the community see what's going on at every step, from the empty codebase and beyond. When a project is seen as active, that draws in developers and gives the project hope of success.

    When I was in college, one of the teams in a software engineering class crashed and burned; their project failed hopelessly. This happened despite one of the team members spending about half the semester up long nights, coding by himself, ignoring the other team members. In their final evaluation, the professor pointed out: Being a software developer isn't like being a fighter pilot. The student, missing the point, quipped: Yeah, I know, at least a fighter pilot has a wingman. Truth is, one person, or two people, or even a small team, aren't going to make a software freedom project succeed. It's only going to succeed when a large community bolsters it and prevents any single point of failure.

    Nevertheless, most software freedom projects are going to fail. But, there is no shame in pushing out a bunch of bytes, encouraging people to take a look, and giving up later if it just doesn't make it. All of science works this way, and there's no reason computer science should be any different. Keeping your project private assures its failure; the only benefit is that you can hide that you even tried. As my graduate advisor told me when I was worried my thesis wasn't a success: a negative result can be just as compelling as a positive one. What's important is to make sure all results are published and available for public scrutiny.

    When I started discussing this idea a few weeks ago, some argued that early GNU programs — the founding software of our community — were developed in private initially. This much is true, but just because GNU developers once operated that way doesn't mean it was the right way. We have the tools now to easily do development in public, so we should. In my view, today, it's not really in the spirit of software freedom until the project, including its design discussions, plans, and prototypes, is developed entirely in public. Code (regardless of its license) merely dumped over the wall at intervals deserves to be forked by a community committed to public development.

    Update (2010-06-12): I completely forgot to mention The Risks of Distributed Version Control by Ben Collins-Sussman, which is five years old now but still useful. Ben is making a similar point to mine, and pointing out how some uses of DVCS can cause the effects that I'm encouraging developers to avoid. I think DVCS is like any tool: it can be used wrongly. The usage Ben warns about should be avoided, and DVCS, when used correctly, assists in the public software development process.

    0Note that pushing code out to the public in the mid-1990s was substantially more arduous (from a technological perspective) than it is today. Those of you who don't remember shar archives may not realize that. :)
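    For the curious: a shar (“shell archive”) was a self-extracting shell script, typically mailed or posted to Usenet, that recreated source files when run. A minimal hand-rolled sketch of the idea (not the actual output format of shar(1), and hello.c is just a placeholder filename) looks like this:

    ```shell
    #!/bin/sh
    # Hand-rolled shar-style self-extracting archive: running this
    # script recreates hello.c in the current directory via a heredoc.
    cat > hello.c <<'SHAR_EOF'
    #include <stdio.h>
    int main(void) { printf("Hello, world\n"); return 0; }
    SHAR_EOF
    echo "extracted hello.c"
    ```

    Recipients would save the message body and pipe it through sh — a far cry from a one-line git clone today.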

    Posted on Friday 11 June 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2010-05-08: Beware of Proprietary Drift

    The Free Software Foundation (FSF) announced yesterday a campaign to collect a clear list of OpenOffice.Org extensions that are FaiF, to convince the OO.o Community Council to list only FaiF extensions, and to find those extensions that are proprietary software, so that OO.o extension developers can focus their efforts on writing replacements under a software-freedom-respecting license.

    I use OpenOffice.Org (OO.o) myself only when someone else sends me a document in that format; I'm a LaTeX, DocBook, MarkDown, or HTML user for documents I originate. Nevertheless, I'm obviously a rare sort of software user, and I understand that OO.o is a program many people use. Plus, a program like OO.o is extremely large, with a diverse user base, so extension-style improvement, from a technological perspective, makes sense to meet all the users' requirements.

    Unfortunately, the social impact of a program designed this way poses a danger to software freedom. It sometimes causes a chain of events that I call “proprietary drift” — a social phenomenon that leads otherwise FaiF codebases to slowly become, in their default use, mostly proprietary packages, at least with regard to the features users find most important and necessary.

    Copyleft itself was originally designed to address this problem: to make sure that improved versions of packages were available with as much software freedom as the original. Copyleft isn't a perfect solution to reach this goal, and furthermore many essential software freedom codebases are under weak copyleft and/or permissive licenses. Such is the case with OO.o, and the proprietary drift of the codebase is thus of great concern here.

    For those of us that have the goal of building a world where software freedom is given for all published and deployed software, this problem of proprietary drift is a terrible threat. In many ways, it's even a worse threat than the marketing and production of fully proprietary software. This may seem a bit counter-intuitive on its surface; logic would seem to dictate that some software freedom is better than none, and therefore an OO.o user with a few proprietary extensions installed is better off than a Microsoft Word user. And, in fact, none of that is false.

    However, the situation introduces a complexity. In short, it can inspire a “good enough” reaction among users. Particularly for users who have generally used only proprietary software, the experience of using a package that mostly respects software freedom can be incredibly liberating. When 98% of your software is FaiF-licensed, you sometimes don't notice the 2% that isn't. Over time, the 2% goes up to 3%, then 4%. This proprietary drift will often lead back to a system not that much different from (for example) Apple's operating system, which has a permissively-licensed software freedom core, but most of the system is very much proprietary. In other words, in the long term, proprietary drift leads to mostly proprietary systems.

    Sometimes, I and other software freedom advocates are criticized for giving such a hard time to those who are seemingly closest to our positions. Often, this is because the threat of proprietary drift is so great. Concern about proprietary drift is, at least in large part, the inspiration for positions opposing UbuntuOne, for the Linux Libre project, and for this new initiative to catalog the FaiF OO.o extensions and rewrite the proprietary ones. We all agree that purely proprietary software programs like those from Apple, Microsoft, and Oracle are the greatest threat to software freedom in the short term. But, in the long term, proprietary drift has the potential to creep up on users who prefer software freedom. You may never see it coming if you aren't constantly vigilant.

    [There's a derivative version of this article available in Arabic. I can't personally attest to the accuracy of the translation, as I can't read Arabic, but osamak, the translator, is a good guy.]

    Disclaimer: While I am a member of FSF's Board of Directors, and I believe the positions stated above are consistent with FSF's positions, the opinions are not necessarily those of the FSF even though I refer to various FSF-sponsored initiatives. Furthermore, this remains my personal blog and the opinions certainly do not express those of my employer nor those of any other organization or project for which I volunteer.

    Posted on Saturday 08 May 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2010-04-21: Launchpad Single Sign On Released

    I wrote 15 months ago thanking Canonical for their release of Launchpad. However, in the interim, a part of the necessary codebase was made proprietary, namely the authentication system used in the canonical instance of Launchpad hosted by Canonical. (Yes, I still insist on using canonical in the canonical way despite the company name making it confusing. :) I added this fact to my list of reasons for abandoning Ubuntu and other Canonical products.

    Fortunately, I've now removed this reason from the list of reasons I switched back to Debian from Ubuntu, since Jono Bacon announced the release of this code today. According to Jono, this release means that Launchpad and its dependencies are again fully Free Software. This is a step forward. And, I did promise many people at Canonical that I'd make a point of thanking them for doing Free Software releases when they do them, since I do make a point of calling them out about negative things they do.

    Like any mixed proprietary/Free Software company, there is tons more to be released. I remain most concerned about UbuntuOne's server side code, but I very much hope this release today marks a bounce-back for Canonical to its roots in the 100% Free Software world.

    Posted on Wednesday 21 April 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-04-07: Proprietary Licenses Are Even Worse Than They Look

    There are lots of evil things that proprietary software companies might do. Companies put their own profit above the rights and freedoms of their users, and to that end, much can be done that subjugates users. Even as someone who avoids proprietary software, I still read many proprietary license agreements (mainly to see how bad they are). I've certainly become numb to the constant barrage of horrible restrictions they place on users. But, sometimes, proprietary licenses go so far that I'm taken aback by their gratuitous cruelty.

    Apple's licenses are probably the easiest example of proprietary licensing terms that are well beyond reasonableness. Of course, Apple's licenses do the usual things like forbidding users from copying, modifying, sharing, and reverse engineering the software. But even worse, Apple also forbids users from running Apple software on any hardware that is not produced by Apple.

    The decoupling of one's hardware vendor from one's software vendor was a great innovation brought about by the PC revolution, in which, ironically, Apple played a role. Computing history has shown us that when your software vendor also controls your hardware, you can easily be “locked in” in ways that make mundane proprietary software licenses seem almost nonthreatening.

    Film image from Tron of the Master Control Program (MCP)

    Indeed, Apple has such a good hype machine that they even have convinced some users this restrictive policy makes computing better. In this worldview, the paternalistic vendor will use its proprietary controls over as many pieces of the technology as possible to keep the infantile users from doing something that's “just bad for them”. The tyrannical MCP of Tron comes quickly to my mind.

    I'm amazed that so many otherwise Free Software supporters are quite happy using OSX and buying Apple products, given these kinds of utterly unacceptable policies. The scariest part, though, is that this practice isn't confined to Apple. I've been recently reminded that other companies, such as IBM, do exactly the same thing. As a Free Software advocate, I'm critical of any company that uses their control of a proprietary software license to demand that users run that software only on the original company's hardware as well. The production and distribution of mundane proprietary software is bad enough. It's unfortunate that companies like Apple and IBM are going the extra mile to treat users even worse.

    Posted on Wednesday 07 April 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.


  • 2010-03-26: LibrePlanet 2010 Completes Its Orbit

    Seven and a half years ago, I got this idea: the membership of the Free Software Foundation should have a chance to get together every year and learn about what the FSF has been doing for the last year. I was so nervous at the first one, on Saturday 15 March 2003, that I even wore a suit, which I rarely do.

    The basic idea was simple: the FSF Board of Directors came into town anyway each March for the annual board meeting. Why not give a chance for FSF associate members to meet the leadership and staff of FSF and ask hard questions to their hearts' content? I'm all about transparency, as you know. :)

    Since leaving the position of Executive Director a few months before the 2005 meeting, I've attended every annual meeting, just as an ordinary Associate Member and FSF volunteer. It's always enjoyable to attend a conference organized by someone else that you used to help organize; it's like, after having done sysadmin work for other people for years, to have someone keep a machine running and up to date just for you. It's been wonderful to watch the FSF AM meeting grow into a full-fledged conference for discussion and collaboration between folks from all over the Free Software world. “One room, one track, one day” has become “five rooms, three tracks, and three days” with the proverbial complaint throughout: But, why do I have to miss this great session so that I can go to some other great session!?!

    Some highlights for me this year were:

    • I saw John Gilmore win a well-deserved FSF Award for the Advancement of Free Software.
    • I got to spend time with the intrepid gnash developer Rob Savoye again, whom I'd known of for years (his legend precedes him) but had rarely had a chance to see in person until lately.
    • I met so many young people excited about software freedom. I can only imagine being only 19 or 20 years old and having the opportunity to meet other Free Software developers in person. At that age, I considered myself lucky to simply have Usenet access so that I could follow and participate in online discussions about Free Software (good ol' gnu.misc.discuss ;). I am so glad that young folks, some from as far away as Brazil, had the opportunity to visit and speak about their work.
    • On the informal Friday sessions, I was a bit amazed that I pulled off a marathon six-hour session of mostly well-received talks/discussions (for which I readily admit I had not prepped well). The first three hours were about the challenges of software freedom on mobile devices, and the second three were about the nitty-gritty details of the hardest and most technical GPL enforcement task: the C&CS check. People seemed to actually enjoy watching me break half my Fedora chroots trying to build some source code for a plasma television. Someone even told me later: it was more fun because we got to see you make all the mistakes.
    • Finally (and I realize I've probably buried the lede here, but I've kept the list chronological, since I wrote most of it before I found out this last thing), after the FSF Board meeting, which followed LibrePlanet, I was informed by a phone call from my good friend Henry Poole that I'd been elected to FSF's Board of Directors, which has now been announced by FSF on Peter Brown's blog. I've often told the story that when I first learned about the FSF as a young programmer and sysadmin, I thought that someday, maybe I could be good enough to get a job as a sysadmin for the FSF. I did indeed volunteer as a sysadmin for the FSF starting around 1996, but I truly felt I'd exceeded any possible dream when I was later named FSF's Executive Director, and was able to serve in that post for so many years. Now, being part of the Board of Directors is an even greater opportunity for involvement in the organization that I've loved and respected for so long.

    FSF is an organization based around a very simple, principled idea: that users and programmers alike deserve inalienable rights to copy, share, modify, and redistribute all the software that they use. This issue isn't merely about making better software (although Free Software developers usually do, anyway); it's about a principle of morality: everyone using computers should be treated well and be given the maximal opportunity to treat their neighbors well, too. Helping make this simple idea into reality is the center of all the work I've done for the last 12 years of my life, and I expect it will be the focus of my (hopefully many) remaining years. I am thankful that the Voting Members of FSF have given me this additional opportunity to help our shared cause. I plan to work hard in this and all the other responsibilities that I already have to our Free Software community. Like everyone on FSF's Board of Directors, I serve in that role completely as a volunteer, so in some ways I feel this is just a natural extension of the volunteer work I've continued to do for the FSF regularly since I left its employment in 2005.

    Finally, I was glad to meet (or meet again) so many FSF supporters at LibrePlanet, and I deeply hope that I can serve our shared goal well in this additional role.

    Posted on Friday 26 March 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-03-15: Is Your Support of Copyleft Logically Consistent?

    Most of you are aware from one of my previous posts that It's a Wonderful Life! is my favorite film. Recently, I encountered something in the software freedom community that reminded me of yet another quote from the film:

    Picture of George Bailey whispering to Clarence at the bar

    Look, uh … I think maybe you better not mention getting your wings around here.
    Why? Don't they believe in angels?
    I… yeah, they believe in them…
    Ohhh … Why should they be surprised when they see one?

    Obviously, I don't believe in angels myself. But, Clarence's (admittedly naïve) logic is actually impeccable: Either you believe in angels or you don't. If you believe in angels, then you shouldn't be surprised to (at least occasionally) see one.

    This film quote came to my mind in reference to a concept in GPL enforcement. Many people give lip service to the idea that the GPL, and copyleft generally, is a unique force that democratizes software and ensures that FLOSS cannot be exploited by proprietary software interests. Many of these same people, though, oppose GPL enforcement even when companies exploit GPL'd code, fail to provide the source code, and take away users' rights to modify and share that software.

    I've admitted that the copyleft is merely a strategy to achieve maximal software freedom. There are other strategies too, such as the Apache community process. The Apache Software Foundation releases software under a permissive non-copyleft license, but then negotiates with companies to convince them to contribute to the code base publicly. For some projects, that strategy has worked well, and I respect it greatly.

    Some (although not all) people in non-copyleft FLOSS communities (like the Apache community) are against GPL enforcement. I disagree with them, but their position is logically consistent. Such folks don't agree with us (copyleft-supporting folks) that a license should be used as a mechanism to guarantee that all published and deployed improved versions of the software are released in software freedom. It's not that those other folks don't prefer FLOSS; they simply prefer a non-legally binding social pressure to encourage software sharing rather than a strategy with legal backup. I prefer a strategy with legal strength, but I still respect non-copyleft folks who don't support that. They take a logically consistent and reasonable approach.

    However, it's ultimately hypocritical to claim support for a copyleft structure but oppose GPL enforcement. If you believe the license should have a legal requirement that ensures software is always distributed in software freedom, then why would you be surprised — or, even worse, angry — that a copyright holder would seek to uphold users' rights when that license is violated?

    There is great value in having multiple simultaneous strategies ongoing to achieve important goals. Universal software freedom is my most important goal, and I expect to spend nearly all of my life focused on achieving it for all published and deployed software in the world. However, I don't expect nor even want everyone else to single-mindedly support my exact same strategies in all cases. The diversity of the software freedom community makes it more likely that we'll succeed if we avoid a single point of failure on any particular plan, and I support that diversity.

    However, I also think it's reasonable to expect logically consistent positions. A copyleft license is effectively indistinguishable from the Apache license if copyleft is never enforced when violations occur. Condemning community-oriented0 GPL enforcement (that seeks primarily to get the code released) while also claiming to support the idea of copyleft is a logically inconsistent and self-contradictory position. It's unfortunate that so many people hold this contradictory position.

    0There are certain types of GPL enforcement that are not consistent with the goal of universal software freedom. For example, some so-called “Open Core” companies are well known for releasing their (solely) copyrighted code under GPL, and then using GPL enforcement as a mechanism to pressure users to take a proprietary license. GPL enforcement is only acceptable in my view if its primary goal is to have all code released under GPL. Such enforcement must never compromise on one point: that compliance with the GPL is a non-negotiable term of settling the enforcement action. If the enforcer is willing to sell out the rights that users have to source code, then even I would condemn, as I have previously, such GPL enforcement as bad for the software freedom community. For this reason, in all GPL enforcement that I engage in, I make it a term of my participation that compliance with the terms of the GPL for the code in question be a non-negotiable requirement.

    Posted on Monday 15 March 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-03-05: Ok, Be Afraid if Someone's Got a Voltmeter Hooked to Your CPU

    Boy, do I hate it when a FLOSS project is given a hard time unfairly. I was this morning greeted with news from many places that OpenSSL, one of the most common FLOSS software libraries used for cryptography, was somehow severely vulnerable.

    I had a hunch what was going on. I quickly downloaded a copy of the academic paper that was cited as the sole source for the story and read it. As I feared, OpenSSL was getting some bad press unfairly. One must really read this academic computer science article in the context it was written; most of those commenting on this paper probably did not.

    First of all, I don't claim to be an expert on cryptography; I think my knowledge of the subject entitles me to opine in a little blog post like this and nothing more. Between college and graduate school, I worked as a system administrator focusing on network security. While a computer science graduate student, I did take two cryptography courses, two theory of computation courses, and one class on complexity theory0. So, compared to the general population I probably am an expert, but compared to people who actually work in cryptography regularly, I'm clearly a novice. However, I suspect that many who have hitherto opined about this academic article to declare a severe vulnerability have even less knowledge of the subject than I do.

    This article, of course, wasn't written for novices like me, and certainly not for the general public nor the technology press. It was written by and for professional researchers who spend much time each week reading dozens of these academic papers, a task I haven't done since graduate school. Indeed, the paper is written in a style I know well; my “welcome to CS graduate school” seminar in 1997 covered the format well.

    The first thing you have to note about such papers is that informed readers generally ignore the parts that a newbie is most likely to focus on: the Abstract, Introduction and Conclusion sections. These sections are promotional materials; they are equivalent to a sales brochure selling you on how important and groundbreaking the research is. Some research is groundbreaking, of course, but most is an incremental step forward toward understanding some theoretical concept, or some report about an isolated but interesting experimental finding.

    Unfortunately, these promotional parts of the paper are the sections that focus on the negative implications for OpenSSL. In the rest of the paper, OpenSSL is merely the software component of the experiment equipment. They likely could have used GNU TLS or any other implementation of RSA taken from a book on cryptography1. But this fact is not even the primary reason that this article isn't really that big of a deal for daily use of cryptography.

    The experiment described in the paper is very difficult to reproduce. You have to cause very subtle faults in computation at specific times. As I understand it, they had to assemble a specialized hardware copy of a SPARC-based GNU/Linux environment to accomplish the experiment.

    Next, the data generated during the run of the software on the specially-constructed faulty hardware must be collected and operated upon by a parallel processing computing environment over the course of many hours. If it turns out all the needed data was gathered, the output of this whole process is the private RSA key.

    The details of the fault generation process deserve special mention. Very specific faults have to occur, and they can't occur such that any other parts of the computation (such as, say, the normal running of the operating system) are interrupted or corrupted. This is somewhat straightforward to get done in a lab environment, but accomplishing it in a production situation would be impractical and improbable. It would also usually require physical access to the hardware holding the private key. Such physical access would, of course, probably give you the private key anyway by simply copying it off the hard drive or out of RAM!

    This is interesting research, and it does suggest some changes that might be useful. For example, if it doesn't slow a system down too much, the integrity of RSA signatures should be verified, on a closely controlled proxy unit with a separate CPU, before the signatures are sent out to a wider audience. But even that would be a process only for the most paranoid. If faults occur on production hardware often enough to generate the bad computations this cracking process relies on, something else will likely go wrong on the hardware too, and it will be declared generally unusable for production before an interloper could gather enough data to crack the key. Thus, another useful change to make based on this finding is to disable and discard RSA keys that were in use on production hardware that went faulty.
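    The countermeasure of verifying signatures before release is worth a concrete illustration. The paper under discussion attacks a different exponentiation routine, but the much older CRT fault attack (Boneh, DeMillo, and Lipton, with Lenstra's gcd refinement) shows the same principle more simply: a single faulty signature that escapes unverified can leak the entire private key. The sketch below uses deliberately tiny toy parameters; it illustrates the mathematics, not the specific experiment in the paper.

```python
from math import gcd

# Toy RSA parameters -- illustrative only, absurdly small for real use.
p, q = 61, 53
N = p * q            # 3233
e = 17
d = 2753             # e * d == 1 (mod (p-1)*(q-1))

m = 42               # message representative to be signed

# CRT signing: exponentiate separately mod p and mod q, then recombine.
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)
q_inv = pow(q, -1, p)                       # requires Python 3.8+
s = (sq + q * ((q_inv * (sp - sq)) % p)) % N
assert pow(s, e, N) == m                    # a correct signature verifies

# Inject a fault into the mod-p half of the computation only.
sp_faulty = (sp + 1) % p
s_bad = (sq + q * ((q_inv * (sp_faulty - sq)) % p)) % N

# The faulty signature is still correct mod q but wrong mod p, so a
# single gcd reveals the secret factor q -- no brute force needed.
factor = gcd((pow(s_bad, e, N) - m) % N, N)
print(factor)        # prints 53, i.e. the secret prime q
```

    This is exactly why checking a signature against the public key before releasing it is an effective defense: the faulty signature fails verification, so it is never handed to an attacker.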

    Finally, I think this article does completely convince me that I would never want to run any RSA computations on a system where the CPU was emulated. Causing faults in an emulated CPU would only require changes to the emulation software, and could be done with careful precision to detect when an RSA-related computation was happening, and only give the faulty result on those occasions. I've never heard of anyone running production cryptography on an emulated CPU, since it would be too slow, and virtualization technologies like Xen, KVM, and QEMU all pass-through CPU instructions directly to hardware (for speed reasons) when the virtualized guest matches the hardware architecture of the host.

    The point, however, is that proper description of the dangers of a “security vulnerability” requires more than a single bit field. Some security vulnerabilities are much worse than others. This one is substantially closer to the “oh, that's cute” end of the spectrum, not the “ZOMG, everyone's going to experience identity theft tomorrow” side.

    0Many casual users don't realize that cryptography — the stuff that secures your networked data from unwanted viewers — isn't about math problems that are unsolvable. In fact, it's often based on math problems that are trivially solvable, but take a very long time to solve. This is why algorithmic complexity questions are central to the question of cryptographic security.
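    To make that footnote concrete, here is a minimal sketch (with toy numbers) of why “trivially solvable but slow” is the whole game: trial-division factoring always succeeds eventually, but its running time explodes with the size of the number being factored.

```python
def trial_division(n):
    """Return the smallest prime factor of n (n > 1) by brute force."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

# Instant for a tiny modulus: 3233 = 53 * 61.
print(trial_division(3233))   # prints 53

# For a 2048-bit RSA modulus, this same loop would need on the order of
# 2**1024 iterations: "solvable" in principle, hopeless in practice.
```
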

    1 I'm oversimplifying a bit here. A key factor in the paper appears to be the linear time algorithm used to compute cryptographic digital signatures, and the fact that the signatures aren't verified for integrity before being deployed. I suspect, though, that just about any RSA system is going to do this. (Although I do usually test the integrity of my GnuPG signatures before sending them out, I do this as a user by hand).

    Posted on Friday 05 March 2010 by Bradley M. Kuhn.

    Comment on this post in this conversation.

  • 2010-03-04: Musings on Software Freedom for Mobile Devices

    I started using GNU/Linux and Free Software in 1992. In those days, while everything I needed for a working computer was generally available in software freedom, there were many components and applications that simply did not exist. For highly technical users who did not need many peripherals, the Free Software community had reached a state of complete software freedom. Yet, in 1992, everyone agreed there was still much work to be done. Even today, we still strive for a desktop and server operating system, with all relevant applications, that grants complete software freedom.

    Looked at broadly, mobile telephone systems are not all that different from 1992-era GNU/Linux systems. The basics are currently available as Free, Libre, and Open Source Software (FLOSS). If you need only the bare minimum of functionality, you can, by picking the right phone hardware, run an almost completely FLOSS operating system and application set. Yet, we have so far to go. This post discusses the current penetration of FLOSS in mobile devices and offers a path forward for free software advocates.

    A Brief History

    The mobile telephone market has never functioned like the traditional computer market. Historically, the mobile user made arrangements with some network carrier through a long-term contract. That carrier “gave” the user a phone or discounted it as a loss-leader. Under that system, few people took their phone hardware choice all that seriously. Perhaps users paid a bit more for a slightly better phone, but generally they nearly always picked among the limited choices provided by the given carrier.

    Meanwhile, Research in Motion was the first to provide corporate-slave-oriented email-enabled devices. Indeed, with the very recent focus on consumer-oriented devices like the iPhone, most users forget that Apple is by far not the preferred fruit for the smart phone user. Today, most people using a “smart phone” are using one given to them by their employer to chain them to their office email 24/7.

    Apple, excellent at manipulating users into paying more for a product merely because it is shiny, also convinced everyone that a phone should now be paid for separately, and that contracts should go even longer. The “race to mediocrity” of the phone market has ended. Phones need real features to stand out. Phones, in fact, aren't phones anymore. They are small mobile computers that can also make phone calls.

    If these small computers had been introduced in 1992, I suppose I'd be left writing the Mobile GNU Manifesto, calling for developers to start from scratch writing operating systems for these new computers, so that all users could have software freedom. Fortunately, we have instead been given a head start. Unlike in 1992, not every company in the market today is completely against releasing Free Software. Specifically, two companies have seen some value in releasing (some parts of) phone operating systems as Free Software: Nokia and Google. However, the two companies have done this for radically different reasons.

    The Current State of Mobile Software Freedom

    For its part, Nokia likely benefited greatly from the traditional carrier system. Most of their phones were provided relatively cheaply with contracts. Their interest in software freedom was limited and perhaps even non-existent. Nokia sold new hardware every time a phone contract was renewed, and the carrier paid the difference between the loss-leader price and Nokia's wholesale cost. The software on the devices was simple and mostly internally developed. What incentive did Nokia have to release software in software freedom? (Nokia realized too late this was the wrong position, but more on that later.)

    In parallel, Nokia had chased another market that I've never fully understood: the tablet PC. Not big enough to be a real computer, but too large to be a phone, these devices have been an idea looking for a user base. Regardless of my personal views on these systems, though, GNU/Linux remains the ideal system for these devices, and Nokia saw that. Nokia built the Debian-ish Maemo system as a tablet system, with no phone. However, I can count on one hand all the people I've met who bothered with these devices; I just don't think a phone-less small computer is going to ever become the rage, even if Apple dumps billions into marketing the iPad. (Anyone remember the Newton?)

    I cannot explain, nor do I even understand, why Nokia took so long to use Maemo as a platform for a tablet-like telephone. But, a few months ago, they finally released one. This N900 is among only a few available phones that make any strides toward a fully free software phone platform. Yet, the list of proprietary components required for operation remains quite long. The common joke is that you can't even charge the battery on your N900 without proprietary software.

    While there are surely people inside Nokia who want more software freedom on their devices, Nokia is fundamentally a hardware company experimenting with software freedom in hopes that it will bolster hardware sales. Convincing Nokia to shorten that proprietary list will prove difficult, and the community based effort to replace that long list with FLOSS (called Mer) faces many challenges. (These challenges will likely increase with the recent Maemo merger with Moblin to form MeeGo).

    Fortunately, hardware companies are not the only entity interested in phone operating systems. Google, ever-focused on routing human eyes to its controlled advertising, realizes that even more eyes will be on mobile computing platforms in the future. With this goal in mind, Google released the Android/Linux system, now available on a variety of phones in varying degrees of software freedom.

    Google's motives are completely different than Nokia's. Technically, Google has no hardware to sell. They do have a set of proprietary applications that yield the “Google online experience” to deliver Google's advertising. From Google's point of view, an easy-to-adopt, licensing-unencumbered platform will broaden their advertising market.

    Thus, Android/Linux is a nearly fully non-copylefted phone operating system platform where Linux is the only GPL licensed component essential to Android's operation. Ideally, Google wants to see Android adopted broadly in both Free Software and mixed Free/proprietary deployments. Google's goals do not match those of the software freedom community, so in some cases, a given Android/Linux device will give the user more software freedom than the N900, but in many cases it will give much less.

    The HTC Dream is the only Android/Linux device I know of for which the necessary proprietary components have been carefully analyzed. Obviously, the “Google experience” applications are proprietary. There are also about 20 hardware interface libraries that do not have source code available in a public repository. However, when lined up against the N900 with Maemo, Android on the HTC Dream can be used as an operational mobile telephone and 3G Internet device using only four proprietary components: GSM firmware, Wifi firmware, and two audio interface libraries. Further proprietary components are needed if you want a working accelerometer, camera, or video codecs, as their hardware interface libraries are all proprietary.

    Based on this analysis, it appears that the HTC Dream currently gives the most software freedom among Android/Linux deployments. It is unlikely that Google wants anything besides their applications to be proprietary. While Google has been unresponsive when asked why these hardware interface libraries are proprietary, it is likely that HTC, the hardware maker with whom Google contracted, insisted that these components remain proprietary, and perhaps fear of patent suits like the one filed this week is to blame here. Meanwhile, although no detailed analysis of the Nexus One is yet available, it's likely similar to the HTC Dream.

    Other Android/Linux devices are now available, such as those from Motorola and Samsung. There appears to have been no detailed analysis done yet on the relative proprietary/freeness ratio of these Android deployments. One can surmise that since these devices are from traditionally proprietary hardware makers, it is unlikely that these platforms are freer than those available from Google, whose maximal interest in a freely available operating system is clear and in contrast to the traditional desires of hardware makers.

    Whether the software is from a hardware-maker desperately trying a new hardware sales strategy, or an advertising salesman who wants some influence over an operating system choice to improve ad delivery, the software freedom community cannot assume that the stewards of these codebases have the interests of the user community at heart. Indeed, the interests of these disparate groups will only occasionally be aligned. Community-oriented forks, as has begun in the Maemo community with Mer, must also begin in the Android/Linux space. We are slowly trying with the Replicant project, founded by myself and my colleague Aaron Williamson.

    A healthy community-oriented phone operating system project will ultimately be an essential component of software freedom on these devices. For example, consider the fate of the Mer project now that Nokia has announced the merger of Maemo with Moblin. Mer does seek to cherry-pick from various small device systems, but its focus was to create a freer Maemo that worked on more devices. Mer now must choose between following Maemo into the Moblin merger, or becoming a true fork. Ideally, the right outcome for software freedom is a community-led effort, but there may not be enough community interest, time and commitment to shepherd a fork while Intel and Nokia push forward on a corporate-controlled codebase. Further, Moblin will likely push the MeeGo project toward more of a tablet-PC operating system than a smart phone one.

    A community-oriented Android/Linux fork has more hope. Google has little to lose by encouraging and even assisting with such forks; such effort would actually be wholly consistent with Google's goals for wider adoption of platforms that allow deployment of Google's proprietary applications. I expect that operating system software-freedom-motivated efforts will be met with more support from Google than from Nokia and/or Intel.

    However, any operating system, even a mobile device one, needs many applications to be useful. Google experience applications for Android/Linux are merely the beginning of the plethora of proprietary applications that will ultimately be available for MeeGo and Android/Linux platforms. For FLOSS developers who don't have a talent for low-level device libraries and operating system software, these applications represent a straightforward contribution towards mobile software freedom. (Obviously, though, if one does have talent for low-level programming, replacing the proprietary .so's on Android/Linux would be the optimal contribution.)

    Indeed, on this point, we can take a page from Free Software history. From the early 1990s onward, fully free GNU/Linux systems succeeded as viable desktop and server systems because disparate groups of developers focused simultaneously on both operating systems and application software. We need that simultaneous diversity of improvement to actually compete with the fully proprietary alternatives, and to ensure that the “mostly FLOSS” systems of today are not the “barely FLOSS” systems of tomorrow.

    Careful readers have likely noticed that I have ignored Nokia's other release, the Symbian codebase. Every time I write or speak about the issues of software freedom in mobile devices, I'm chastised for leaving it out of the story. My answer is always simple: when a FLOSS version of Symbian can be compiled from source code, using a FLOSS compiler or SDK, and that binary can be installed onto an actual working mobile phone device, then (and only then) will I believe that the Symbian source release has value beyond historical interest. We have to get honest as a community about the future of Symbian: it's a ten-year-old proprietary codebase designed for devices of that era that doesn't bootstrap with any compilers our community uses regularly. Unless there's a radical change to these facts, the code belongs in a museum, not running on a phone.

    Also, lest my own community of hard-core FLOSS advocates flame me, I must also mention the Neo FreeRunner device and the OpenMoko project. This was a noble experiment: a freely specified hardware platform running 100% FLOSS. I used an OpenMoko FreeRunner myself, hoping that it would be the mobile phone our community could rally around. I do think the device and its (various) software stack(s) have a future as an experimental, hobbyist devi