data sovereignty  

U.S. Republican congressional staff said in a report released Wednesday that previous efforts to regulate privacy technology were flawed and that lawmakers need to learn more about technology before trying to regulate it. The 25-page white paper, entitled “Going Dark, Going Forward: A Primer on the Encryption Debate,” does not provide any solution to the encryption fight. It is notable, however, for its criticism of other lawmakers who have tried to legislate their way out of the encryption debate.

Source: US Efforts To Regulate Encryption Have Been Flawed, Government Report Finds – Slashdot

A saving grace for students across the world, Alexandra Elbakyan’s portal, Sci-Hub, pools millions of expensive academic papers published in online journals for free. At the center of a potentially multi-billion dollar battle in US courts, she has vowed to continue her work. “There should be no obstacles to accessing knowledge, I believe,” she told RT in an email interview, echoing her earlier reference to Article 27 of the United Nations Declaration of Human Rights: that “everyone has the right…”

Source: EXCLUSIVE: Robin Hood neuroscientist behind Sci-Hub research-pirate site talks to RT — RT News

The dark web actually has promise. In essence, it’s the World Wide Web as it was originally envisioned.

Looking beyond the scaremongering, however, the dark web actually has promise. In essence, it’s the World Wide Web as it was originally envisioned: a space beyond the control of individual states, where ideas can be exchanged freely without fear of being censored. As countries continue to crack down on the web, its dark counterpart is only going to become more relevant as a place to discuss and connect with each other. We shouldn’t let the myth of the dark web ruin that potential.

Source: The Dark Web as You Know It Is a Myth

This Kat sometimes wonders whether every big copyright dispute these days has a major political or philosophical subtext to it — an example of which can be found below. From guest contributor Emma Perot comes this appraisal of a dispute (reported on TorrentFreak here) between a giant publisher of valuable and useful scholarly material on the one hand, and those who seek access to that same information on the other. Writes Emma:

In a Robin Hood-like manner, Sci-Hub has been providing academic articles to researchers in the science and technology community free of charge since 2011. Now Elsevier, one of the largest academic publishers, is seeking to put an end to this open access model. Elsevier publishes over 2,000 journals and has an income of more than US$1 billion. Wielding its dominance in the research community, Elsevier charges US$30 to access an article. This is a staggering price when you consider how many articles are needed in order to undertake significant research. In the UK, universities generally pay subscriber fees so that students and staff can access journals. However, this is not the case for everyone.

Alexandra Elbakyan is one researcher who could not access Elsevier’s journals because the University of Kazakhstan did not subscribe to the service. In order to progress with her research project, she found forums that facilitated the sharing of articles for free. Elbakyan realised that there were many others like herself who were jumping through hoops for their research. From this necessity sprang the creation of Sci-Hub, which collects journal articles and makes them available to the public without charge.

The problem that Sci-Hub is now facing is that the copyright in many of the articles it has published vests in Elsevier. As stipulated by the terms and conditions of publication, authors assign their exclusive rights (s.106 U.S. Copyright Act 1976) to the publisher.
As such, Elsevier is entitled to charge whatever access fee it desires, or to restrict access altogether. By reproducing these articles without Elsevier’s permission, Sci-Hub is infringing Elsevier’s copyright and is likely to lose the case against it. Nonetheless, Elbakyan is insistent on fighting for continued open access as she believes that “Everyone should have access to knowledge regardless of their income or affiliation”.

The author is sympathetic to Elbakyan’s stance and believes that her moral argument is compelling, if not viable under the current capitalist regime. The history of copyright protection reveals an idealistic beginning which better accords with Elbakyan’s philosophy. Copyright protection in the U.S. has a foundation in Article I, s.8 of the U.S. Constitution, which states that “The Congress shall have power … to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” This clause underpins copyright with utilitarianism by providing an incentive of control to authors. The purpose of this control is to encourage (but not guarantee) the creation of products which will contribute to the growth of society.

The main criticism of the incentive theory is that people create works even in the absence of intellectual property protection. This seems apparent on the facts before us, as authors who publish with Elsevier surrender their copyright protection at the first possible opportunity. Even if the control incentive (there are many other forms of incentive, such as reputation building, money, and pure interest) were necessary to encourage research, the utilitarian philosophy does not bode well in a capitalist society where publishers such as Elsevier operate to make a profit rather than to further the altruistic goal of disseminating information.

Pay wall or ordinary wall? They’re all the same to Hubert

Different approaches can be taken to overcome the barriers presented by legal paywalls. One such approach is to publish in independent, open access journals. The problem with this is that researchers want the benefit of the prestige associated with well-established, peer-reviewed journals. While this may seem like an egotistical issue, researchers spend years trying to develop a reputation of excellence in order to be presented with more opportunities for advancement. Publishing in a well-respected journal ensures quality control standards have been met, thus validating the article. This is particularly so in the science world, where research often requires funding to access lab facilities and equipment.

Alternatively, researchers could boycott publishers such as Elsevier with the aim of reducing access fees. The Cost of Knowledge, which encourages publishing in open access journals, is currently doing this and has attracted over 15,000 signatures to date. Signatories agree not to publish or perform editorial work for Elsevier’s journals. The success

Source: The IPKat: Paywalls and Robin Hoods: the tale of Elsevier and

Today in Science, members of the Facebook data science team released a provocative study about adult Facebook users in the US “who volunteer their ideological affiliation in their profile.” The study “quantified the extent to which individuals encounter comparatively more or less diverse” hard news “while interacting via Facebook’s algorithmically ranked News Feed.”*

Source: multicast » Blog Archive » The Facebook “It’s Not Our Fault” Study

Adobe has just given us a graphic demonstration of how not to handle security and privacy issues. A hacker acquaintance of mine has tipped me to a huge security and privacy violation on the part of Adobe. That anonymous acquaintance was examining Adobe’s DRM for educational purposes when they noticed that Digital Editions 4, the newest version of Adobe’s Epub app, seemed to be sending an awful lot of data to Adobe’s servers. My source told me, and I can confirm, that Adobe is tracking users in the app and uploading the data to their servers. Adobe was contacted in advance of publication, but declined to respond. Edit: Adobe responded Tuesday night. And just to be clear, I have seen this happen, and I can also tell you that Benjamin Daniel Mussler, the security researcher who found the security hole, has also tested this at my request and saw it with his own eyes.

via Adobe is Spying on Users, Collecting Data on Their eBook Libraries – The Digital Reader.

NSA: Linux Journal is an “extremist forum” and its readers get flagged for extra surveillance


A new story published on the German site Tagesschau, and followed up by BoingBoing, has uncovered some shocking details about who the NSA targets for surveillance, including visitors to Linux Journal itself.

While it has been revealed before that the NSA captures just about all Internet traffic for a short time, the Tagesschau story provides new details about how the NSA’s XKEYSCORE program decides which traffic to keep indefinitely. XKEYSCORE uses specific selectors to flag traffic, and the article reveals that Web searches for Tor and Tails–software I’ve covered here in Linux Journal that helps to protect a user’s anonymity and privacy on the Internet–are among the selectors that will flag you as “extremist” and targeted for further surveillance. If you just consider how many Linux Journal readers have read our Tor and Tails coverage in the magazine, that alone would flag quite a few innocent people as extremist.

While that is troubling in itself, even more troubling to readers of this site is that has been flagged as a selector! Tagesschau has published the relevant XKEYSCORE source code, and if you look closely at the rule definitions, you will see* listed alongside Tails and Tor. According to the article, the NSA considers Linux Journal an “extremist forum”. This means that merely looking for any Linux content on Linux Journal, not just content about anonymizing software or encryption, is considered suspicious and means your Internet traffic may be stored indefinitely.
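The selector mechanism described above amounts to pattern matching against captured traffic metadata such as URLs and search queries. A minimal illustrative sketch of that idea (the patterns and the `flag_for_retention` name are invented for illustration; they are not the actual leaked XKEYSCORE rule definitions):

```python
import re

# Illustrative selectors in the spirit of the rules described above.
# These regexes are examples only, not the real XKEYSCORE definitions.
SELECTORS = [
    re.compile(r"\btails\b", re.IGNORECASE),        # Tails live-OS searches
    re.compile(r"\btor\b", re.IGNORECASE),          # Tor anonymity network
    re.compile(r"linuxjournal\.com", re.IGNORECASE),
]

def flag_for_retention(url_or_query: str) -> bool:
    """Return True if the captured traffic matches any selector and
    would therefore be retained for further analysis."""
    return any(p.search(url_or_query) for p in SELECTORS)
```

Note how coarse such rules are: anything that merely mentions a flagged term is kept, which is exactly why ordinary Linux Journal readers get swept up.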


via NSA: Linux Journal is an “extremist forum” and its readers get flagged for extra surveillance | Linux Journal.

Are you breaking any laws?

Jotunbane: Several 🙂

Do you care? Why (not)?

Jotunbane: Sure I care. But what can I do? The laws are wrong on several different levels (the copyright monopoly has been extended 16 times in my lifetime alone, and will continue to be extended every time Mickey Mouse gets close to the public domain). There will always be consequences when you decide to break the law, and the risk of punishment is clearly part of the equation. Under US law I could get fined $150,000 for each infringement, but this is not a question of money, it’s a question of doing the right thing. Sharing is caring, so of course I care.


Interviews with E-Book-Pirates: “The book publishing industry is repeating the same mistakes of the music industry”.


The error message that launched this whole investigation.

Darrell Whitelaw / Twitter

For years now, Internet users have accepted the risk of files and content they share through various online services being subject to takedown requests based on the Digital Millennium Copyright Act (DMCA) and/or content-matching algorithms. But users have also gotten used to treating services like Dropbox as their own private, cloud-based file storage and sharing systems, facilitating direct person-to-person file transfer without having to worry.

This weekend, though, a small corner of the Internet exploded with concern that Dropbox was going too far, actually scanning users’ private and directly peer-shared files for potential copyright issues. What’s actually going on is a little more complicated than that, but it shows that sharing a file on Dropbox isn’t always the same as sharing that file directly from your hard drive over something like e-mail or instant messenger.

The whole kerfuffle started yesterday evening, when one Darrell Whitelaw tweeted a picture of an error he received when trying to share a link to a Dropbox file via IM. The Dropbox webpage warned him and his friend that "certain files in this folder can’t be shared due to a takedown request in accordance with the DMCA."

Whitelaw freely admits that the content he was sharing was a copyrighted video, but he still expressed surprise that Dropbox was apparently watching what he shared for copyright issues. "I treat [Dropbox] like my hard drive," he tweeted. "This shows it’s not private, nor mine, even though I pay for it."

In response to follow-up questions from Ars, Whitelaw said the link he sent to his friend via IM was technically a public link and theoretically could have been shared more widely than the simple IM between friends. That said, he noted that the DMCA notice appeared on the Dropbox webpage "immediately" after the link was generated, suggesting that Dropbox was automatically checking shared files somehow to see if they were copyrighted material rather than waiting for a specific DMCA takedown request.

Dropbox did confirm to Ars that it checks publicly shared file links against hashes of other files that have been previously subject to successful DMCA requests. "We sometimes receive DMCA notices to remove links on copyright grounds," the company said in a statement provided to Ars. "When we receive these, we process them according to the law and disable the identified link. We have an automated system that then prevents other users from sharing the identical material using another Dropbox link. This is done by comparing file hashes."
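Dropbox’s statement boils down to a hash-blocklist check at link-creation time. A rough sketch of that general technique (the function names are hypothetical, not Dropbox’s API, and the blocklist entry is just the SHA-256 of the bytes `b"test"`, used as a stand-in for a real takedown target):

```python
import hashlib

# Hashes of files previously subject to successful DMCA takedowns.
# The single example entry is the SHA-256 of b"test", not real data.
DMCA_BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_sha256(path: str) -> str:
    """Hash a file in 64 KiB chunks so large files never sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def can_create_share_link(path: str) -> bool:
    """Allow a public share link only if the file's hash is not
    blocklisted. The file itself stays in the user's account either way."""
    return file_sha256(path) not in DMCA_BLOCKLIST
```

This also matches the detail reported above: the check fires when the public link is generated, and the private copy is never touched.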

Dropbox added that this comparison happens when a public link to your file is created and that "we don’t look at the files in your private folders and are committed to keeping your stuff safe." The company wouldn’t comment publicly on whether the same content-matching algorithm was run on files shared directly with other Dropbox users via the service’s account-to-account sharing functions, but the wording of the statement suggests that this system only applies to publicly shared links.

We should be clear here that Dropbox hasn’t removed the file from Whitelaw’s account; they just closed off the option for him to share that file with others. In a tweeted response to Whitelaw, Dropbox Support said that "content removed under DMCA only affects share-links." Dropbox explains its copyright policy on a Help Center page that lays out the boilerplate: "you do not have the right to share files unless you own the copyright in them or have been given permission by the copyright owner to share them." The Help Center then directs users to its DMCA policy page.

Dropbox has also been making use of file hashing algorithms for a while now as a means of de-duplicating identical files stored across different users’ accounts. That means that if I try to upload an identical copy of a 20GB movie file that has already been stored in someone else’s Dropbox account, the service will simply give my account access to a version of that same file rather than allowing me to upload an identical version. This not only saves bandwidth on the user’s end but significant storage space on Dropbox’s end as well.
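The de-duplication described here is a form of content-addressed storage: the hash of a file’s bytes, not the user who uploaded it, decides where it lives. A toy sketch of the idea (class and method names are invented for illustration):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical uploads share one stored blob."""

    def __init__(self):
        self.blobs = {}       # sha256 hex digest -> file bytes (stored once)
        self.user_files = {}  # (user, filename) -> sha256 hex digest

    def upload(self, user: str, filename: str, data: bytes) -> bool:
        """Record the file for this user. Returns True if the bytes were
        actually stored, or False if an identical blob already existed and
        the account was simply pointed at it (no extra storage used)."""
        digest = hashlib.sha256(data).hexdigest()
        is_new = digest not in self.blobs
        if is_new:
            self.blobs[digest] = data
        self.user_files[(user, filename)] = digest
        return is_new
```

With this layout, a 20GB movie uploaded by a thousand users still occupies the space of a single copy.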

Some researchers have warned of security and privacy concerns based on these de-duplication efforts in the past, but the open source Dropship project attempted to bend the feature to users’ advantage. By making use of the file hashing system, Dropship effectively tried to trick Dropbox into granting access to files on Dropbox’s servers that the user didn’t actually have access to. Dropbox has taken pains to stop this kind of "fake" file sharing through its service.
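Dropship’s trick follows directly from a dedup protocol that trusts a client-supplied hash. A deliberately naive sketch of the vulnerable flow (all names are invented; a hardened service would demand proof of possession, e.g. hashes over random byte ranges, before linking):

```python
import hashlib

class NaiveDedupServer:
    """Toy server whose upload protocol trusts a client-supplied hash.

    If the claimed hash is already stored, the server links the existing
    blob to the account without ever receiving the bytes - the behaviour
    Dropship exploited to obtain files the client never possessed."""

    def __init__(self):
        self.blobs = {}     # sha256 hex digest -> bytes
        self.accounts = {}  # user -> set of digests the user can download

    def upload(self, user: str, data: bytes = None,
               claimed_hash: str = None) -> str:
        if claimed_hash is None:
            claimed_hash = hashlib.sha256(data).hexdigest()
        if claimed_hash in self.blobs:
            # Dedup short-circuit: grant access with no byte transfer at all.
            self.accounts.setdefault(user, set()).add(claimed_hash)
            return "linked"
        if data is None:
            return "upload required"
        self.blobs[claimed_hash] = data
        self.accounts.setdefault(user, set()).add(claimed_hash)
        return "stored"
```

Anyone who learns the hash of a popular file can thus "upload" it instantly and gain a download link, which is exactly the abuse Dropbox moved to block.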

In any case, it seems a similar hashing effort is in place to make it easier for Dropbox to proactively check files shared through its servers for similarity to content previously blocked by a DMCA request. In this it’s not too different from services like YouTube, which uses a robust ContentID system to automatically identify copyrighted material as soon as it’s uploaded.

In this, both Dropbox and YouTube are simply responding to the legal environment they find themselves in. The DMCA requires companies that run sharing services to take reasonable measures to make sure that re-posting of copyrighted content doesn’t occur after a legitimate DMCA notice has been issued. Whitelaw himself doesn’t blame the service for taking these proactive steps, in fact. "This isn’t a Dropbox problem," he told Ars via tweet. "They’re just following the laws laid out for them. Was just surprised to see it."

via Dropbox clarifies its policy on reviewing shared files for DMCA issues | Ars Technica.

The beauty of P2P and BitTorrent is that it’s a distributed system. Indeed, as far as sites are concerned, bandwidth between users (and of course content) is available for free, and running in basic mode requires only a few dollars a month on top to pay for a server. Trading in the big gas guzzler for something a little more frugal should be a survival option.

Of course, in many cases this could potentially mean file-sharing stepping back in sophistication to 2004, which may as well be the stone age to many of today’s younger enthusiasts. That said, ask anyone who was around at the time if it was so bad. Yes, at times Suprnova required 30 refreshes until a page actually loaded, and yes, initial seeders uploaded at a snail’s pace, but the scene was buzzing and people were having fun. And if it’s not about having fun anymore, something has gone wrong along the way.

Maybe a fresh start and a resurgence of some old fashioned non-monetary gain values is what is needed. The money can’t be targeted if there isn’t any.

via Bombing BitTorrent and File-Sharing Websites Back to the Stone Age | TorrentFreak.

The United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has released an important new report that examines freedom of expression on the Internet. The report is very critical of rules such as graduated response/three strikes, arguing that such laws may violate the International Covenant on Civil and Political Rights (Canada became a member in 1976). Moreover, the report expresses concern with notice-and-takedown systems, noting that they are subject to abuse by both governments and private actors.

On the issue of graduated response, the report states that the Special Rapporteur is “alarmed by proposals to disconnect users from Internet access if they violate intellectual property rights. This also includes legislation based on the concept of ‘graduated response’, which imposes a series of penalties on copyright infringers that could lead to suspension of Internet service, such as the so-called ‘three strikes-law’ in France and the Digital Economy Act 2010 of the United Kingdom. Beyond the national level, the Anti-Counterfeiting Trade Agreement (ACTA) has been proposed as a multilateral agreement to establish international standards on intellectual property rights enforcement. While the provisions to disconnect individuals from Internet access for violating the treaty have been removed from the final text of December 2010, the Special Rapporteur remains watchful about the treaty’s eventual implications for intermediary liability and the right to freedom of expression.”

In light of these concerns, the report argues that Internet disconnection is a disproportionate response that violates international law, and that such measures should be repealed in countries that have adopted them.

via Michael Geist – UN Report Says Internet Three Strikes Laws Violate International Law.

Anti-censorship campaigners compared the plan to China’s notorious system for controlling citizens’ access to blogs, news websites and social networking services. The proposal emerged from an obscure meeting of the Council of the European Union’s Law Enforcement Work Party (LEWP), a forum for cooperation on issues such as counter-terrorism, customs and fraud. “The Presidency of the LEWP presented its intention to propose concrete measures towards creating a single secure European cyberspace,” according to brief minutes of the meeting.

via Alarm over EU ‘Great Firewall’ proposal – Telegraph.

Wikileaks represents a new type of (h)activism, which shifts the source of potential threat from a few, dangerous hackers and a larger group of mostly harmless activists — both outsiders to an organization — to those who are on the inside. For insiders trying to smuggle information out, anonymity is a necessary condition for participation. Wikileaks has demonstrated that the access to anonymity can be democratized, made simple and user friendly.


LibraryGoblin sez, “HarperCollins has decided to change their agreement with e-book distributor OverDrive. They forced OverDrive, which is a main e-book distributor for libraries, to agree to terms so that HarperCollins e-books will only be licensed for checkout 26 times. Librarians have blown up over this, calling for a boycott of HarperCollins, breaking the DRM on e-books–basically doing anything to let HarperCollins and other publishers know they consider this abuse.”

I've talked to a lot of librarians about why they buy DRM books for their collections, and they generally emphasize that buying ebooks with DRM works pretty well, generates few complaints, and gets the books their patrons want on the devices their patrons use. And it's absolutely true: on the whole, DRM ebooks, like DRM movies and DRM games, work pretty well.

But they fail really badly. No matter how crappy a library's relationship with a print publisher might be, the publisher couldn't force them to destroy the books in their collections after 26 checkouts. DRM is like the Ford Pinto: it's a smooth ride, right up to the point at which it explodes and ruins your day.

HarperCollins has some smart and good digital people (they're my UK/Australia/South Africa publisher, and I've met a ton of them). But batshit insane crap like this is proof that it doesn't matter how many good people there are at a company that has a tool at its disposal that is as dangerous and awful as DRM: the gun on the mantelpiece in act one will always go off by act three.

And that's why libraries should just stop buying DRM media for their collections. Period. It's unsafe at any speed.

I mean it. When HarperCollins backs down and says, “Oh, no, sorry, we didn't mean it, you can have unlimited ebook checkouts,” the libraries' answers should be “Not good enough. We want DRM-free or nothing.” Stop buying DRM ebooks. Do you think that if you buy twice, or three times, or ten times as many crippled books that you'll get more negotiating leverage with which to overcome abusive crap like this? Do you think that if more of your patrons come to rely on you for ebooks for their devices, that DRM vendors won't notice that your relevance is tied to their product and tighten the screws?

You have exactly one weapon in your arsenal to keep yourself from being caught in this leg-hold trap: your collections budget. Stop buying from publishers who stick time-bombs in their ebooks. Yes, you can go to the Copyright Office every three years and ask for a temporary exemption to the DMCA to let you jailbreak your collections, but that isn't Plan B, it's Plan Z. Plan A is to stop putting dangerous, anti-patron technology into your collections in the first place.

The publisher also issued a short statement: “HarperCollins is committed to the library channel. We believe this change balances the value libraries get from our titles with the need to protect our authors and ensure a presence in public libraries and the communities they serve for years to come.”

Josh Marwell, President, Sales for HarperCollins, told LJ that the 26 circulation limit was arrived at after considering a number of factors, including the average lifespan of a print book, and wear and tear on circulating copies.

As noted in the letter, the terms will not be specific to OverDrive, and will likewise apply to “all eBook vendors or distributors offering this publisher's titles for library lending.” The new terms will not be retroactive, and will apply only to new titles. More details on the new terms are set to be announced next week.

For the record, all of my HarperCollins ebooks are also available as DRM-free Creative Commons downloads. And as bad as HarperCollins' terms are, they're still better than Macmillan's, my US/Canadian publisher, who don't allow any library circulation of their ebook titles.

via HarperCollins to libraries: we will nuke your ebooks after 26 checkouts – Boing Boing.

 | TorrentFreak


Operation Payback has been without a doubt the longest and most widespread attack on anti-piracy groups, lawyers and lobbyists. Despite the massive media coverage, little is known about the key players who coordinate the operation and DDoS attacks. A relatively small group of people, they are seemingly fuelled by anger, frustration and a strong desire to have their voices heard.

In the last two months, dozens of anti-piracy groups, copyright lawyers and pro-copyright outfits have been targeted by a group of Anonymous Internet ‘vigilantes’ under the flag of Operation Payback.

Initially DDoS assaults were started against the MPAA, RIAA and anti-piracy company AiPlex Software because these outfits had targeted The Pirate Bay. Those DDoS attacks were later replicated against many other targets that have spoken out against piracy or for copyright, resulting in widespread media coverage.

Even law enforcement agencies showed interest in the operation recently. Last week CNET reported that an FBI probe is underway, and TorrentFreak personally knows of at least one court case against a person that was associated with the operation.

Besides covering the results of the DDoS attacks and website hacks, very little is known about the people who are part of the operation. Who are they? What do they want, and what are their future plans? In this article we hope to solve a few pieces of the puzzle.

After numerous talks with people who are actively involved in Operation Payback, we learned that there are huge differences between the personal beliefs of members.

We can safely conclude that this Anonymous group doesn’t have a broad shared set of ideals. Instead, it is bound together by anger, frustration and the desire to be heard. Their actions are a direct response to the anti-piracy efforts of pro-copyright groups.

Aside from shared frustration, the people affiliated with the operation have something else in common. They are nearly all self-described geeks, avid file-sharers and many also have programming skills.

When Operation Payback started, most players were not looking to participate in the copyright debate in a constructive way; they simply wanted to pay back the outfits that dared to target something they loved: file-sharing.

Many of the first participants who set the DDoS actions in motion either came from or were recruited on the message board 4Chan. But as the operation developed the 4Chan connection slowly disappeared. What’s left today are around a dozen members who are actively involved in planning the operation’s future, and several dozen more who help to execute the DDoS attacks.

An Anonymous spokesperson, from whose hand most of the manifestos originated, described the structure of the different groups to us.

“The core group is the #command channel on IRC. This core group does nothing more than being some sort of intermediary between the people in that IRC channel and the actual attack. Another group of people on IRC (the main channel called #operationpayback) are just there to fire on targets.”

Occasionally new people are invited to join the command to coordinate a specific attack, but a small group of people remains. The command group is also the place where new targets are picked, where future plans are discussed, and where manifestos are drafted. This self-appointed group makes most of the decisions, but often acts upon suggestions from bystanders in the main IRC channel.

Now let’s rewind a little and go back to the first attacks that started off the operation in September.

The operation’s command was ‘pleasantly’ surprised by the overwhelming media coverage and attention, but wondered where to go from there. They became the center of attention but really had no plan going forward. Eventually they decided to continue down the road that brought them there in the first place – more DDoS attacks.

What started as retaliation against groups that wanted to take out The Pirate Bay slowly transformed into an attack against anyone involved in anti-piracy efforts – from trade groups to lawyers to dissenting artists. Since not all members were actively following the copyright debate, command often acted on suggestions from the public in the main IRC channel.

What followed was an avalanche of DDoS attacks that were picked up by several media outlets. This motivated the group to continue their strategy. Anonymous’ spokesperson admitted to TorrentFreak that the media attention was indeed part of what fuelled the operation to go forward. But not without some strategic mistakes.

As the operation continued more trivial targets were introduced and the group started to lose sympathy from parts of the public. While targeting the company that admittedly DDoSed The Pirate Bay could be seen as payback by some, trying to take out Government bodies such as the United States Copyright Office and UK’s Intellectual Property Office made less sense. In part, these targets were chosen by anarchistic influences in the operation.

“I fight with anonops because I believe that the current political system failed, and that a system based on anarchy is the only viable system,” one member told TorrentFreak. “I encouraged them to go after political targets just because I like Anarchy.”

The Anonymous spokesperson admitted to TorrentFreak that mistakes were made, and command also realized that something had to change. The targets were running out and the attacks weren’t gaining as much attention as they did in the beginning. It was a great way to gather attention, but not sustainable. In fact, even from within the operation not everyone was convinced that DDoS attacks were the best ‘solution’.

“I personally don’t like the concept of violence and attacking, but violence itself does raise attention,” Anonymous’ spokesperson told TorrentFreak.

“Attacking sites is one side of the story, but this operation would finally have to serve a purpose, otherwise it wouldn’t exist. We all agree that the way things [abuse of copyright] are currently done, is not the right way.”

Last week command decided to slow the DDoS attacks down and choose another strategy, mainly to regain the focus of attention. It was decided that they would make a list of demands for governments worldwide. In a move opposed to the desires of the anarchic influences, command decided to get involved in the political discussion.

Copyright and patent laws have to change, they argued, and right off the bat they were willing to negotiate. They called for scrapping censorship and anti-piracy lawsuits and for limiting copyright and patent terms, but not for getting rid of copyright entirely. Interestingly, the demands say nothing about legalizing file-sharing.

To some this new and more gentle position taken by Anonymous came as a complete surprise. We asked the spokesman of the group about this confusing message and he said that there are actually several political parties that already adopt a similar position, like the Pirate parties and the Greens in Europe.

However, according to the spokesman (who wrote the latest manifesto with other members in Piratepad) they consciously chose this set of demands. “Some of us have the vision of actually getting rid of copyright/patents entirely, but we are at least trying to stay slightly realistic.”

“What we are now trying to do, is to straighten out ideals, and trying to make them both heard and accepted. Nobody would listen to us if we said piracy should be legal, but when we ask for copyright lifespan to be reduced to ‘fair’ lengths, that would sound a lot more reasonable,” the spokesman told TorrentFreak.

The demands have been published on the Operation Payback site for nearly a week, but thus far the media coverage hasn’t been as great as when they launched their first DDoS. Some have wondered whether this is the right path to continue in the first place, as it may get in the way of groups and political parties that have fought for similar ‘ideals’ for years already.

The spokesman disagreed and said that Operation Payback has “momentum” now.

So here we are nearly two months after Anonymous started Operation Payback. The initial anger and frustration seems to have been replaced by a more friendly form of activism for the time being. The group wanted to have their voice heard and they succeeded in that. However, being listened to by politicians and entertainment industry bosses might take more than that.




How hard would it be to go a week without Google? Or, to up the ante, without Facebook, Amazon, Skype, Twitter, Apple, eBay and Google? It wouldn’t be impossible, but for even a moderate Internet user, it would be a real pain. Forgoing Google and Amazon is just inconvenient; forgoing Facebook or Twitter means giving up whole categories of activity. For most of us, avoiding the Internet’s dominant firms would be a lot harder than bypassing Starbucks, Wal-Mart or other companies that dominate some corner of what was once called the real world.

The Internet has long been held up as a model for what the free market is supposed to look like—competition in its purest form. So why does it look increasingly like a Monopoly board? Most of the major sectors today are controlled by one dominant company or an oligopoly. Google “owns” search; Facebook, social networking; eBay rules auctions; Apple dominates online content delivery; Amazon, retail; and so on.

There are digital Kashmirs, disputed territories that remain anyone’s game, like digital publishing. But the dominions of major firms have enjoyed surprisingly secure borders over the last five years, their core markets secure. Microsoft’s Bing, launched last year by a giant with $40 billion in cash on hand, has captured a mere 3.25% of query volume (Google retains 83%). Still, no one expects Google Buzz to seriously encroach on Facebook’s market, or, for that matter, Skype to take over from Twitter. Though the border incursions do keep dominant firms on their toes, they have largely foundered as business ventures.

The rise of the app (a dedicated program that runs on a mobile device or Facebook) may seem to challenge the neat sorting of functions among a handful of firms, but even this development is part of the larger trend. To stay alive, all apps must secure a place on a monopolist’s platform, thus strengthening the monopolist’s market dominance.

Today’s Internet borders will probably change eventually, especially as new markets appear. But it’s hard to avoid the conclusion that we are living in an age of large information monopolies. Could it be that the free market on the Internet actually tends toward monopolies? Could it even be that demand, of all things, is actually winnowing the online free market—that Americans, so diverse and individualistic, actually love these monopolies?

The history of American information firms suggests that the answer to both questions is “yes.” Over the long haul, competition has been the exception, monopoly the rule. Apart from brief periods of openness created by new inventions or antitrust breakups, every medium, starting with the telegraph, has eventually proved to be a case study in monopoly. In fact, many of those firms are still around, if not quite as powerful as they once were, including AT&T, Paramount and NBC.

Internet industries develop pretty much like any other industry that depends on a network: A single firm can dominate the market if the product becomes more valuable to each user as the number of users rises. Such networks have a natural tendency to grow, and that growth leads to dominance. That was the key to Western Union’s telegraph monopoly in the 19th century and to the telephone monopoly of its successor, AT&T. The Bell lines simply reached more people than anyone else’s, so ever more customers came to depend on them in a feedback loop of expanding market share. The more customers they reached, the more impervious the firm became to challengers.
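The feedback loop described here is often formalized (imperfectly) as Metcalfe's law, under which a network's value grows roughly with the number of possible connections between its users. The toy simulation below is purely illustrative and not from the article: it simply shows how, once one network is larger, value-seeking users keep defecting to it and the gap compounds.

```python
# Purely illustrative, not from the article: Metcalfe's law is one common
# (and contested) way to formalize the network effect described above.
def metcalfe_value(users: int) -> int:
    """Network value proportional to the number of possible user pairs."""
    return users * (users - 1) // 2

def feedback_loop(big: int, small: int, rounds: int) -> tuple[int, int]:
    """Each round, 1% of the smaller network defects to whichever side
    offers more value per user -- the 'feedback loop of expanding
    market share' in miniature."""
    for _ in range(rounds):
        if small <= 1:
            break
        if metcalfe_value(big) // big >= metcalfe_value(small) // small:
            defectors = max(small // 100, 1)
            big, small = big + defectors, small - defectors
    return big, small

big, small = feedback_loop(1_000_000, 900_000, rounds=50)
print(big, small)  # the initially larger network only pulls further ahead
```

Under this (simplified) model the outcome is winner-take-all even when the two networks start at comparable sizes, which is the dynamic the Western Union and AT&T examples describe.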

Still, in a land where at least two mega-colas and two brands of diaper can duke it out indefinitely, why are there so many single-firm information markets? The explanation would seem to lie in the famous American preference for convenience. With networks, size brings convenience.

Consider that, in the late 1990s, there were many competing search engines, like Lycos, AltaVista and Bigfoot. In the 2000s, there were many social networking sites, including Friendster. It was we, collectively, who made Google and Facebook dominant. The biggest sites were faster, better and easier to use than their competitors, and the benefits only grew as more users signed on. But all of those individually rational decisions to sign on to the same sites yielded a result that no one desires in principle—a world with fewer options.

Every time we follow the leader for ostensibly good reasons, the consequence is a narrowing of our choices. This is an important principle of information economics: Market power is rarely seized so much as it is surrendered up, and that surrender is born less of a deliberate decision than of going with the flow.

We wouldn’t fret over monopoly so much if it came with a term limit. If Facebook’s rule over social networking were somehow restricted to, say, 10 years—or better, ended the moment the firm lost its technical superiority—the very idea of monopoly might seem almost wholesome. The problem is that dominant firms are like congressional incumbents and African dictators: They rarely give up even when they are clearly past their prime. Facing decline, they do everything possible to stay in power. And that’s when the rest of us suffer.

AT&T’s near-absolute dominion over the telephone lasted from about 1914 until the 1984 breakup, all the while delaying the advent of lower prices and innovative technologies that new entrants would eventually bring. The Hollywood studios took effective control of American film in the 1930s, and even now, weakened versions of them remain in charge. Information monopolies can have very long half-lives.

Declining information monopolists often find a lifeline of last resort in the form of Uncle Sam. The government has conferred its blessing on monopolies in information industries with unusual frequency. Sometimes this protection has yielded reciprocal benefits, with the owner of an information network offering the state something valuable in return, like warrantless wiretaps.

Essential to NBC, CBS and ABC’s long domination of broadcasting was the government’s protection of them first from FM radio (the networks were stuck on AM) and later from the cable TV industry, which it suppressed for decades. Today, Verizon and AT&T’s dominance of wireless phone service can be credited in part to de facto assistance from the U.S., and consequently their niche is probably the safest in the entire industry. Monopolies may be a natural development, but the most enduring ones are usually state-sponsored. All the more so since no one has ever conceived a better way of scotching competitors than to make them comply with complex federal regulation.

Info-monopolies tend to be good-to-great in the short term and bad-to-terrible in the long term. For a time, firms deliver great conveniences, powerful efficiencies and dazzling innovations. That’s why a young monopoly is often linked to a medium’s golden age. Today, a single search engine has made virtually everyone’s life simpler and easier, just as a single phone network did 100 years ago. Monopolies also generate enormous profits that can be reinvested into expansion, research and even public projects: AT&T wired America and invented the transistor; Google is scanning the world’s libraries.

The downside shows up later, as the monopolist ages and the will to innovate is replaced by mere will to power. In the 1930s, AT&T took the strangely Luddite measure of suppressing its own invention of magnetic recording, for fear it would deter use of the telephone. The costs of the monopoly are mostly borne by entrepreneurs and innovators. Over the long run, the consequences afflict the public in more subtle ways, as what were once highly dynamic parts of the economy begin to stagnate.

These negative effects are why people like Theodore Roosevelt, Louis Brandeis and Thurman Arnold regarded monopoly as an evil to be destroyed by the federal courts. They took a rather literal reading of the Sherman Act, which states, “Every person who shall monopolize…shall be deemed guilty of a felony.” But today we don’t have the heart to euthanize a healthy firm like Facebook just because it’s huge and happens to know more about us than the IRS.

The Internet is still relatively young, and we remain in the golden age of these monopolists. We can also take comfort from the fact that most of the Internet’s giants profess an awareness of their awesome powers and some sense of attendant duty to the public. Perhaps if we’re vigilant, we can prolong the benign phase of their rule. But let’s not pretend that we live in anything but an age of monopolies.

—Tim Wu is a professor at Columbia Law School. His new book is “The Master Switch: The Rise and Fall of Information Empires.”


Slashdot Technology Story | Google Challenges Facebook Over User Address Books

“When you sign in to Facebook, you have the option of importing your email contacts, to ‘friend’ them all on the social network. Importing the other way — easily copying your Facebook contacts to Gmail — required jumping through considerable copy/paste hoops or third-party scripts. Google said enough is enough, and they’re no longer helping sites that don’t allow two-way contact merging. The stated intention is to stand their ground and persuade other sites into letting users control where their data goes — but will this just lead to more sites putting up ‘data walls’?”

A couple of years ago, Michael F. Brown asked the question: who owns native culture? The conflict he described centered on the place of Native American artifacts in US public museums, and on the right of Native Americans to self-determination over their representation as well as over the material fate of artifacts that once belonged to them as a cultural group.

I now want to raise a similar question: who owns digital native culture? If the local audiovisual heritage is accessible only through YouTube, if local digital identities and social connections are stored by Facebook, if local tastes are known better by Amazon than by anyone else, and if all of these critical infrastructures are beyond the reach not only of individuals but of groups and nations as well, then what happens to digital self-determination?

Everyone is now preoccupied with Facebook’s privacy scandals. That conflict is between a company and the individual. I would like to reframe it by putting the group in focus and ask: what happens to local cultures if the infrastructures they use to create, reproduce, maintain and archive their individual and group identities are not owned or controlled by them, if in fact they have no say in the fate of the very data their digital existence consists of?

“What we have done for ourselves alone dies with us; what we have done for others and for the world remains and is immortal.” This admittedly somewhat clichéd quote, from Dan Brown’s The Lost Symbol, is one of the most popular and most frequently highlighted lines among readers of the e-books sold by the online bookstore. The electronic turn of the book world makes it possible to observe not only which lines e-book readers (there are 3 million of them in America alone) like best, but also things such as which books are most often bought but never read, and which books are read late into the night, fastest, slowest or most often, to mention just a few examples. The organization that controls the distribution of e-books can, without any particular effort, obtain information of previously unimaginable depth and detail about who reads what, in what way, how many times, for how long, paying attention to which passages and skipping which others.



So while Zuckerberg was announcing Facebook’s ambitious plans, Dixon and some like-minded programmers were cooking up their own launch: an open-source standard for recommendations called Open Like. The idea behind the project, which is still in its embryonic stages, is that websites and services would be able to federate recommendations or “likes” by adopting a uniform standard for the data. In the same way that OAuth (which Facebook is now supporting) is an open standard for sharing user information, and OpenID is an open standard for logging into websites and services, Open Like would allow anyone who adopts the standard to make use of recommendation data.

“I feel like everyone is falling asleep while Facebook and Twitter are taking over,” Dixon said in a phone interview. “I love Facebook and Twitter — I think I’m even an investor in Twitter through some venture funds I’m a shareholder in — but I just think it’s a bad thing for the web. What if HTTP or SMTP were owned by one company?” What Facebook is trying to do with its open graph protocol might be good for Facebook, the Hunch co-founder says, but that doesn’t mean it’s good for anyone else. “They’re acting in their economic interests — there’s nothing evil about it,” he says. “But people who think that it’s some kind of move towards being open are just naive.”
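Open Like never got beyond its embryonic stage in this account and published no schema, so the following is a purely hypothetical sketch with invented field names. It only illustrates the idea Dixon describes: expressing a “like” as a portable, self-describing record that any adopting site could consume, the way OpenID federates logins.

```python
import json

# Hypothetical sketch -- these field names are invented for illustration,
# not taken from any published Open Like specification.
def make_like(actor: str, target_url: str, published: str) -> str:
    """Serialize a recommendation so that any site adopting the same
    convention could consume it, instead of the data living only
    inside one company's social graph."""
    record = {
        "actor": actor,           # an identifier the user controls
        "verb": "like",           # the kind of gesture being federated
        "target": target_url,     # the thing being recommended
        "published": published,   # ISO 8601 timestamp
    }
    return json.dumps(record, sort_keys=True)

like = make_like("https://example.com/users/alice",
                 "https://example.org/articles/42",
                 "2010-04-27T12:00:00Z")
print(json.loads(like)["verb"])
```

The point of such a record is precisely the contrast Dixon draws with Facebook’s open graph protocol: any site could emit or consume it without asking one company’s permission.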

Music Ally | Blog Archive

Brindley from Music Ally now (I feel like Paxman on University Challenge). He talks about where the crackdown on piracy should come from – suing your own consumers hasn’t worked in markets like the US. “When you start taking action against them, that tends to lead to some pretty bad PR,” he says. And he points out that taking action against the file-sharing sites hasn’t worked well either. Yet pressuring the intermediary – ISPs – is dangerous. “When you start playing around with people’s connections… that’s a pretty severe intervention.” He thinks that actually cutting people off from accessing the internet in their own homes – “when that’s going to become just like electricity, water – a basic human right… I’m not sure it’s worth that battle.”

The book marketing conference was a trade conference where all the major publishers and distributors were present. I talked to them about the fate of local markets if they do not match the efforts of Google Books in digitizing books and making them accessible.

A book marketing conference for publishers and distributors. I tried to give everyone a scare, in close connection with what I think about information sovereignty.

Airstrike Video Brings Attention to WikiLeaks Site –

Three months ago, WikiLeaks, a whistleblower Web site that posts classified and sensitive documents, put out an urgent call for help on Twitter.

“Have encrypted videos of U.S. bomb strikes on civilians. We need super computer time,” stated the Web site, which calls itself “an intelligence agency of the people.”

Somehow — it will not say how — WikiLeaks found the necessary computer time to decrypt a graphic video, released Monday, of a United States Army assault in Baghdad in 2007 that left 12 people dead, including two employees of the news agency Reuters. The video has been viewed more than two million times on YouTube, and has been replayed hundreds of times in television news reports.

The release of the Iraq video is drawing attention to the once-fringe Web site, which aims to bring to light hidden information about governments and multinational corporations — putting secrets in plain sight and protecting the identity of those who help do so. Accordingly, the site has become a thorn in the side of authorities in the United States and abroad. With the Iraq attack video, the clearinghouse for sensitive documents is edging closer toward a form of investigative journalism and to advocacy.

“That’s arguably what spy agencies do — high-tech investigative journalism,” Julian Assange, one of the site’s founders, said in an interview on Tuesday. “It’s time that the media upgraded its capabilities along those lines.”

Mr. Assange, an Australian activist and journalist, founded the site three years ago along with a group of like-minded activists and computer experts. Since then, WikiLeaks has published documents about toxic dumping in Africa, protocols from Guantánamo Bay, e-mail messages from Sarah Palin’s personal account and 9/11 pager messages.

Today there is a core group of five full-time volunteers, according to Daniel Schmitt, a site spokesman, and there are 800 to 1,000 people whom the group can call on for expertise in areas like encryption, programming and writing news releases.

The site is not shy about its intent to shape media coverage, and Mr. Assange said he considered himself both a journalist and an advocate; should he be forced to choose one, he would choose advocate. WikiLeaks did not merely post the 38-minute video; it used the label “Collateral Murder” and said it depicted “indiscriminate” and “unprovoked” killing. (The Pentagon defended the killings and said no disciplinary action was taken at the time of the incident.)

“From my human point of view, I couldn’t believe it would be so easy to wreak that kind of havoc on the city, when they can’t see what is really going on there,” Mr. Schmitt said in an interview from Germany on Monday night.

The Web site also posted a 17-minute edited version, which proved to be much more widely viewed on YouTube than the full version. Critics contend that the shorter video was misleading because it did not make clear that the attacks took place amid clashes in the neighborhood and that one of the men was carrying a rocket-propelled grenade.

By releasing such a graphic video, which a media organization had tried in vain to get through traditional channels, WikiLeaks has inserted itself in the national discussion about the role of journalism in the digital age. Where judges and plaintiffs could once stop or delay publication with a court order, WikiLeaks exists in a digital sphere in which information becomes instantly available.

“The most significant thing about the release of the Baghdad video is that several million more people are on the same page,” with knowledge of WikiLeaks, said Lisa Lynch, an assistant professor of journalism at Concordia University in Montreal, who recently published a paper about the site. “It is amazing that outside of the conventional channels of information something like this can happen.”

Reuters had tried for two and a half years through the Freedom of Information Act to obtain the Iraq video, to no avail. WikiLeaks, as always, refuses to say how it obtained the video, and credits only “our courageous source.”

Mr. Assange said “research institutions” offered to help decrypt the Army video, but he declined to detail how they went about it. After decrypting the attack video, WikiLeaks in concert with an Icelandic television channel sent two people to Baghdad last weekend to gather information about the killings, at a cost of $50,000, the site said.

David Schlesinger, Reuters editor in chief, said Tuesday that the video was disturbing to watch “but also important to watch.” He said he hoped to meet with the Pentagon “to press the need to learn lessons from this tragedy.”

WikiLeaks publishes its material on its own site, which is housed on a few dozen servers around the globe, including places like Sweden, Belgium and the United States that the organization considers friendly to journalists and document leakers, Mr. Schmitt said.

By being everywhere, yet in no exact place, WikiLeaks is, in effect, beyond the reach of any institution or government that hopes to silence it.

Because it relies on donations, however, WikiLeaks says it has struggled to keep its servers online. It has found moral, but not financial, support from some news organizations, like The Guardian in Britain, which said in January that “If you want to read the exposés of the future, it’s time to chip in.”

On Tuesday, WikiLeaks claimed to have another encrypted video, said to show an American airstrike in Afghanistan that killed 97 civilians last year, and used the opportunity to ask for donations.

WikiLeaks has grown increasingly controversial as it has published more material. (The United States Army called it a threat to its operations in a report last month.) Many have tried to silence the site; in Britain, WikiLeaks has been used a number of times to evade court injunctions on publication, where judges had ruled that the material would violate the privacy of the people involved. The courts reversed themselves when they discovered how ineffectual their rulings were.

Another early attempt to shut down the site involved a United States District Court judge in California. In 2008, Judge Jeffrey S. White ordered the American version of the site shut down after it published confidential documents concerning a subsidiary of a Swiss bank. Two weeks later he reversed himself, in part recognizing that the order had little effect because the same material could be accessed on a number of other “mirror sites.”

Judge White said at the time, “We live in an age when people can do some good things and people can do some terrible things without accountability necessarily in a court of law.”