The perils of federated protocols
The lure of "federation" for internet services is potent, since it allows disparate providers to interoperate and users to choose the provider that (most) meets their needs—or to become their own provider. Many of the longtime services, such as email, web serving, DNS, and others, are federated, but many of the newest services decidedly are not. That tension is playing out right now for the Signal open-source encrypted messaging and voice application from Open Whisper Systems (OWS) and others who would like to be able to federate with it.
Signal and LibreSignal
The Signal app for Android is available under the GPLv3, but it is not fully free in the eyes of some because it relies on the Google Cloud Messaging (GCM) service that is part of Google Play Services. That means that some amount of metadata (but not the contents of the encrypted messages) traverses Google's servers. For privacy reasons, some have changed the Signal app to eliminate that dependency, which is something that its license clearly allows, but they still want those changed apps to be able to communicate with the rest of the Signal-using world. That's where the problem starts.
A service like Signal relies upon servers as intermediaries—and servers are not free. If apps like LibreSignal—a fork of the Signal app that removes the GCM dependency—want to communicate with other Signal users, they must either use the same servers or run their own servers that are federated with those run by OWS. But that is not to be.
In a thread on the LibreSignal issue tracker, OWS developer Moxie Marlinspike stated that OWS did not want LibreSignal to use its servers (nor to use "Signal" as part of its name):
If you think running servers is difficult and expensive (you're right), ask yourself why you feel entitled for us to run them for your product.
One of the LibreSignal developers, Michel Le Bihan (posting as mimi89999), said that the project was willing to change the name, but wondered: "If I finance running a TextSecure server for LibreSignal, will you federate with us?" Marlinspike deemed that improbable: "It is unlikely that we will ever federate with any servers outside of our control again, it makes changes really difficult."
A few years back, the encrypted-messaging piece of Signal, TextSecure, was federated with the CyanogenMod servers so that users of that platform could send messages to TextSecure users on other platforms. That federation is no longer happening; Marlinspike explained the problems that federating caused.
Marlinspike expanded his thoughts on federated protocols in a blog post. The crux of the problem with federated protocols is that the entire ecosystem is moving too fast for services to support them. It restricts what the service provider can change because the existing features still need to be supported:
I thought about it. We got to the first production version of IP, and have been trying for the past 20 years to switch to a second production version of IP with limited success. We got to HTTP version 1.1 in 1997, and have been stuck there until now. Likewise, SMTP, IRC, DNS, XMPP, are all similarly frozen in time circa the late 1990s. To answer his question, that's how far the internet got. It got to the late 90s.
That has taken us pretty far, but it's undeniable that once you federate your protocol, it becomes very difficult to make changes. And right now, at the application level, things that stand still don't fare very well in a world where the ecosystem is moving.
Indeed, cannibalizing a federated application-layer protocol into a centralized service is almost a sure recipe for a successful consumer product today. It's what Slack did with IRC, what Facebook did with email, and what WhatsApp has done with XMPP. In each case, the federated service is stuck in time, while the centralized service is able to iterate into the modern world and beyond.
So while it's nice that I'm able to host my own email, that's also the reason why my email isn't end to end encrypted, and probably never will be. By contrast, WhatsApp was able to introduce end to end encryption to over a billion users with a single software update.
But he also recognized some of the downsides to his conclusions. Federation allows users to choose who has access to their metadata, but it is generally already a lost cause because there are typically just a few providers—or even a single provider (e.g. Gmail)—that provide most users with the service. Though he believes it is impossible to have a new federated service on today's internet, he is not entirely happy with that outcome: "it's something that I'd love to be proven wrong about".
XMPP?
Various posters in the issue-tracker thread pointed to the Extensible Messaging and Presence Protocol (XMPP) as a potential solution, along with various projects (such as Conversations and ChatSecure) that use XMPP. Marlinspike is not particularly hopeful that XMPP-based solutions will lead to a successful messaging network, as Signal's non-federated approach has done. He noted that the Guardian Project has been working on the problem as long as OWS has, "so why are Signal's growth, ratings, and engagement substantially higher?"
In his blog post, he gets more specific about the shortcomings he sees in XMPP: while it is an extensible protocol, that extensibility leads to problems of its own.
One of the most user-friendly choices that Signal made was to use the phone numbers already stored in the contacts list as the identifiers for sending messages—exactly like regular SMS text messages. It is "not possible to build an identity this simple in a federated landscape", Marlinspike said.
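As a rough illustration of what that identity scheme amounts to (a minimal sketch, not Signal's actual code): whatever is in the contacts list gets normalized to a canonical E.164 string, and that string is the lookup key. The example below assumes the third-party python-phonenumbers package and uses made-up numbers.

    # Minimal sketch (not Signal's implementation): normalize contact-list
    # entries to E.164 so differently formatted copies of the same number
    # collapse to one identifier. Assumes the "phonenumbers" package.
    import phonenumbers

    def to_identifier(raw_number, default_region="US"):
        parsed = phonenumbers.parse(raw_number, default_region)
        if not phonenumbers.is_valid_number(parsed):
            raise ValueError("not a valid phone number: %r" % raw_number)
        return phonenumbers.format_number(parsed,
                                          phonenumbers.PhoneNumberFormat.E164)

    # All of these collapse to the same identifier, so "just pick a contact" works:
    for entry in ["(415) 555-2671", "415-555-2671", "+1 415 555 2671"]:
        print(to_identifier(entry))      # +14155552671 in each case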
The focus for Signal is on making it usable for ordinary users:
We want to produce technology that is privacy preserving but feels just like everything else people already use, not somehow convince everyone to fundamentally change their workflow and their expectations.
But the fact remains that Signal uses Google services and various people (including some that are precisely the target audience for encrypted messaging) do not trust Google or, perhaps, worry about what the company might be compelled to do by various governments. Those who want to communicate with other Signal users but not with Google servers are shut out. In a post on his blog, Matthew Garrett laments the situation:
Right now the choices I have for communicating with people I know are either convenient and secure but require non-free code (Signal), convenient and free but insecure (SMS) or secure and free but horribly inconvenient (gpg). Is there really no way for us to work as a community to develop something that's all three?
Comments on that post point to various options that may more or less fill those needs, but the lack of a network effect for the projects that were listed, such as Matrix, makes them a hard sell in the "convenience" department. But "walled gardens" are just that—federation is one way out of that particular trap. For companies that are trying to build their business, though, walled gardens have some obvious appeal, which may also be playing into the plans of OWS.
Centralization
Part of the reason that Marlinspike's thoughts are striking a chord in some circles (this Hacker News thread for example) is because of his reputation in fields like cryptography and security, as well as for having been instrumental in building the popular Signal service and apps. Much of the internet is built on federated technology, which has led to a lot of innovation and important progress along the way. It is concerning to many (perhaps including Marlinspike) that centralized services may be the way forward.
It is a difficult problem. Free, secure, and convenient solutions in the messaging space have not (yet?) come about. Even non-realtime encrypted communication via email is inconvenient, at best, and effectively unusable by those who are not tech savvy. Trusting some centralized service to handle all that may provide convenience, but there are always going to be concerns about the trustworthiness of that provider (and the code it runs). This is not really a new problem for the free-software world, but Marlinspike's thoughts have brought it into sharp focus. Difficult or no, it is a problem worth solving.
Index entries for this article:
Security: Anonymity
Security: Encryption
Posted May 19, 2016 3:29 UTC (Thu)
by josh (subscriber, #17465)
[Link] (25 responses)
Perhaps the vendor of a successful centralized technology just can't imagine anyone else doing what they do and keeping up with them, or doesn't want anyone else to start imagining that. If you look at other developers as those who help drive you forward and vice versa, rather than looking down on other developers as those who hold you back, then openness, collaboration, and federation start making a lot more sense.
Posted May 19, 2016 5:34 UTC (Thu)
by wahern (subscriber, #37304)
[Link] (17 responses)
A new federated framework could pick up steam as long as there were one or a few primary implementations that the vast majority of people used. Those implementations wouldn't have to become bogged down in interoperability as long as there was some kind of tacit understanding by users that interoperability problems were the fault of marginal implementations dragging their feet, as is the case with browsers. An end-user might not have any understanding of the technology, but the vast majority still understand the concept of switching client software. However slow, that's a significantly faster process in reality than administrators upgrading backend software, despite the fact that there are far fewer backend systems.
I think the underlying mechanism here relates back to the end-to-end principle. Put as much of the logic as possible at the end nodes, keeping the transport layer as simple as possible. That was one of the flaws of XMPP, IMO. Too much logic exists in the server-side code[1]. But server-side software is much less responsive to user complaints. Corporate and ISP IT departments aren't especially well-known for quickly upgrading servers to support fancy new user features, whereas users will naturally migrate to client software providing the better experience.
OTOH, designing protocols and architecting software which minimize dependencies on intermediate nodes is very difficult. It's just too easy to put logic in the middle, especially when you're on a time crunch. And if you're furthering proprietary interests, well then it's a no-brainer.
[1] For example, I never understood why anybody ever thought it was a good idea to use out-of-band channels for XMPP file transfer and voice, or to use in-band channels which relied on server support. I understand performance concerns, but it was destined for failure rather than failure only being a possibility. Those decisions created dependencies that required a substantial number of server systems upgrading, and upgrading responsively in step with user preferences. The only way intermediate nodes get upgraded like that is when they're centrally controlled.
When XEPs emerged which were more server-agnostic, there was no predominant client-side software which carried the day. Google's Jingle failed, I would argue, because support was never added to libpurple. If Google was serious about it, they would have added it to libpurple, or forked it and taken the lead in pushing features to XMPP clients. Though, my knowledge of the history of this is limited so I'm probably missing critical details. My analysis may be factually wrong, but I think my point is valid.
Posted May 19, 2016 6:16 UTC (Thu)
by smcv (subscriber, #53363)
[Link]
The reason Jingle does that is precisely to route around the servers: any server that doesn't break the most basic level of extensibility (passes unknown client-to-client messages through unaltered) does not need changes to support Jingle. Google designed it like this for the reasons you describe: whenever XEPs required special server-side support, most servers didn't implement that in practice, leaving those XEPs unavailable even in clients that theoretically supported them.
Unfortunately, Facebook's and MSN's "XMPP" bridges didn't have even that level of extensibility: they dropped messages they didn't understand, even if by design they didn't need to understand them (because they were bridging into an internal protocol that didn't have a corresponding concept). As a result, "works on any server" became "works on any server except Facebook's and MSN's".
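To make the pass-through property concrete: a Jingle session-initiate is just an IQ stanza addressed from one client to another, carrying a payload in a namespace (urn:xmpp:jingle:1, per XEP-0166) that the server can treat as opaque. The sketch below is illustrative only (hypothetical JIDs, most details elided), not code from any XMPP project; a server that forwards unknown client-to-client payloads unaltered carries it without changes, while one that drops unrecognized children breaks it.

    # Sketch only: the kind of client-to-client IQ stanza Jingle uses. The
    # server routes it by the 'to' attribute; the <jingle/> payload is opaque.
    import xml.etree.ElementTree as ET

    def jingle_session_initiate(frm, to, sid):
        iq = ET.Element("iq", {"type": "set", "from": frm, "to": to, "id": "init1"})
        jingle = ET.SubElement(iq, "{urn:xmpp:jingle:1}jingle",
                               {"action": "session-initiate",
                                "initiator": frm, "sid": sid})
        # Real sessions carry <content/>, <description/> and <transport/>
        # children here; the server never needs to understand any of them.
        ET.SubElement(jingle, "{urn:xmpp:jingle:1}content",
                      {"creator": "initiator", "name": "voice"})
        return ET.tostring(iq).decode()

    print(jingle_session_initiate("romeo@example.net/orchard",
                                  "juliet@example.com/balcony", "a73sjjvkla37jfea"))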
Posted May 19, 2016 11:58 UTC (Thu)
by khim (subscriber, #9252)
[Link] (15 responses)
> However stunted and slow the evolution of HTTP has been, the browser environments have evolved quickly.
If you compare it to the non-existent development of SMTP or the decades-long process of switching to IPv6, then yes, sure. If you compare it to other competing, non-federated technologies… then it's slow as a snail. What have we gotten on the web lately? WebGL, WebRTC, SPDY/HTTP/2… and now there are "exciting" new features: low-level bytecode, GPU computation with WebCL (or maybe with compute shaders?)… they were discussed years ago and are still not usable on the web. Compare that to the development of non-federated platforms: when an old API is no longer suitable it's replaced quickly (both Metal and Vulkan were introduced and implemented in a span of about one year), and political discussions don't bog down development (think of the NaCl/asm.js/WebAssembly fiasco: Android and even laggard Windows Phone 8 got a way to run fast, native code in a couple of years—while the "quickly evolving" world of web browsers was left behind…).

Sorry, but the development of web browsers shows just how right Marlinspike is: federated worlds can exist—but only if the non-federated alternative is unusable. The internet and the web have won not because they were federated, but because they were large: they just had more users than AOL or CompuServe. When you need federated protocols and clients/servers to reach billions of users, such protocols win by default. When you have the ability to develop a non-federated solution… well, it's not even a contest.

In the hardware world federated solutions often win because they spread development cost (if one company develops a non-federated solution and dozens or hundreds of companies develop a federated one, then sheer money power often prevails), but in the world of software this factor does not work: companies can just pool their resources and develop one single solution instead.
Posted May 19, 2016 12:09 UTC (Thu)
by pizza (subscriber, #46)
[Link] (1 responses)
They didn't start out large. The question you should be asking is how/why they became large, given their early disadvantages.
And that answer is ... federation.
Posted May 19, 2016 18:11 UTC (Thu)
by khim (subscriber, #9252)
[Link]
> They didn't start out large. The question you should be asking is how/why they became large, given their early disadvantages.
> And that answer is ... federation.
That's the right answer to the wrong question. Of course federation makes it possible to build a large system and, indeed, when large systems can't be monolithic they become federated. Even today there are many systems which are federated: not just ISPs, but cellular networks, railroads, airlines, and many other systems are federated today! Heck, you could even find popular federated systems developed in the 21st century (here is one, e.g.).

But they all share one important quality: they have some kind of "ceiling", some reason which limits the growth of the unfederated alternative. It may be a technical reason (AOL/CompuServe growth hit its limit at the US borders: it was impossible to provide cheap enough access to people in Asia or Europe because intercontinental phone calls were incredibly expensive), or it may be a non-technical reason (the RIAA and MPAA made sure that there would be no huge torrent sites with millions of users, thus we naturally got DHT), but if there is no "ceiling" then there is no reason for federation. It's a more cumbersome and thus less attractive solution; users only choose it out of necessity, not out of desire.
Posted May 20, 2016 8:28 UTC (Fri)
by niner (subscriber, #26151)
[Link] (7 responses)
Posted May 22, 2016 16:54 UTC (Sun)
by jospoortvliet (guest, #33164)
[Link] (6 responses)
Posted May 22, 2016 17:35 UTC (Sun)
by flussence (guest, #85566)
[Link] (5 responses)
Posted May 22, 2016 19:34 UTC (Sun)
by mathstuf (subscriber, #69389)
[Link]
Posted May 23, 2016 14:16 UTC (Mon)
by khim (subscriber, #9252)
[Link] (3 responses)
> Ten years later, still no sign of it ever becoming usable.
> One day Vulkan might work, but I'm not holding my breath for it.
Define "usable", please. All of HPC today is built around GPGPU and similar architectures (things like Xeon Phi); it's used on your mobile phone (to process photos in real time and do other compute-intensive things) and so on. The fact that Linux distros (and the desktop in general) are no longer at the center of this development is unfortunate (especially since lots of that development is based on Linux), but it does not mean that all that development has just stopped and disappeared. Vulkan works today (although not that many apps use it), and Metal is used by real apps, too (look here - there are links to the app store where they can be found). Sure, you couldn't use it on your device if you insist on 100% OSS, but most users out there don't care and use it just fine.

And while WebGL is usable today, don't forget that we are talking about technology which is almost two decades old (DirectX was released in 1996 and OpenGL is even older than that). Sure, you could counter that other unfederated APIs (things like Glide, e.g.) have died—but there are emulators and people still play these games. My point was that most platforms had 3D by the end of the XX century, while the web needed another decade before it got it and, ironically enough, exactly when the web finally, finally arrived at that decade-old platform, the rest of the world had already moved on to a significantly different 3D API! When will the web have things like Metal or RenderScript? My guess: the most likely answer is "never"—and if Android does, indeed, arrive on the desktop, even WebGL and WebRTC could eventually become unavailable (since people would use native apps instead of webapps for things like video calls or maps)—although that is not guaranteed; a freeze à la SMTP is more likely.
Posted May 23, 2016 15:52 UTC (Mon)
by pizza (subscriber, #46)
[Link]
I think it's fair to say that GPGPU is only now becoming "usable" without relying on (highly) proprietary software stacks.
Posted May 24, 2016 22:43 UTC (Tue)
by flussence (guest, #85566)
[Link] (1 responses)
In the way VDPAU is today. Able to do more than run the demo/test code shipped with Mesa. Reducing power consumption by offloading work to an appropriate device instead of increasing it by being dead weight to compile.
> things like Xeon Phi, it's used on your mobile phone
I don't think my phone has a 300W processor (it seems to cope with image/photo editing fine regardless). Did you mean to say Someone Else's Computers? Those kind of services are best enjoyed as schadenfreude.
Posted May 25, 2016 1:24 UTC (Wed)
by nybble41 (subscriber, #55106)
[Link]
> I don't think my phone has a 300W processor
Indeed, a mobile phone with a 300W Xeon Phi processor would last about five seconds before either draining the battery or setting itself on fire, whichever comes first. Possibly both.
There should have been a closing parenthesis after "Phi". That was meant to be read as "GPGPU is used on your mobile phone".
Posted May 20, 2016 8:36 UTC (Fri)
by paulj (subscriber, #341)
[Link] (3 responses)
SMTP hasn't developed much because it's very mature, and basically does what's needed. The maturity of SMTP hasn't stopped development at higher layers above SMTP. Also, if you want to blame SMTP for identity and abuse issues, no one has solved those any better in any other protocol, in a way that couldn't also be applied to SMTP. SMTP is actually wildly successful, because it is "federated", distributed, and decentralised.
Posted May 29, 2016 23:47 UTC (Sun)
by HelloWorld (guest, #56129)
[Link] (2 responses)
Posted May 30, 2016 1:17 UTC (Mon)
by Fowl (subscriber, #65667)
[Link]
Posted Feb 6, 2019 9:18 UTC (Wed)
by jond (subscriber, #37669)
[Link]
Posted May 24, 2016 7:35 UTC (Tue)
by micka (subscriber, #38720)
[Link]
Posted May 19, 2016 6:06 UTC (Thu)
by roc (subscriber, #30627)
[Link] (3 responses)
Marlinspike carefully phrased his mention of HTTP to dance around the fact that HTTP/2 is being deployed right now (and experimental predecessors have been deployed for years). That section of his post is deliberately misleading.
He's right that decentralized evolution imposes costs, including delays. But he's not right that centralized always wins.
Posted May 19, 2016 11:58 UTC (Thu)
by khim (subscriber, #9252)
[Link] (2 responses)
It's a bit dishonest, but not by much. By your own admission, HTTP/2 is being deployed right now and experimental predecessors have been deployed for years. Basically it shows that the federated world can be moved along — if you are willing to spend about 10x more resources and accept a fraction of the development speed.
Posted May 19, 2016 12:47 UTC (Thu)
by hkario (subscriber, #94864)
[Link] (1 responses)
did we learn nothing?
Posted May 19, 2016 17:11 UTC (Thu)
by khim (subscriber, #9252)
[Link]
> did we learn nothing?
Sure. The lesson is obvious: no matter how dominant your platform is, if you stay dormant for years, sooner or later someone will bypass you. The web which we enjoy today is the result of Microsoft's attempt to rebuild it: the architecture astronauts won, and instead of quickly adding features to MS IE that would have made a breakout attempt impossible, Microsoft decided to rebuild everything from scratch. The end result arrived years later, with reduced functionality and insane resource consumption.

This gave a chance to Firefox/Safari/Chrome—but it also gave the developers of these monsters a false sense of security: they decided that since Microsoft was stupid, all other contenders for the "try before you buy" app deployment platform would be just as stupid. The height of folly is, of course, the stillborn Firefox OS, but I think the ball was lost when Mozilla decided that it could afford to dictate the rules to app developers: "it's my way or the highway"… Most developers chose the highway… well, some picked some other highway, but almost everyone left anyway…

Some still believe that they will return, but I seriously doubt it: Apple and Google are not like Microsoft (at least not yet); they iterate fast and have already made web development mostly irrelevant. I fully expect to see a regression of the web platform in the next few years—it'll be interesting to see what this process will look like.
Posted May 19, 2016 9:05 UTC (Thu)
by mjthayer (guest, #39183)
[Link] (1 responses)
Actually, why not? Security and secrecy would be acceptable if both parties wanted it, and you would have the convenience of having one app for all communications. Not much lost: if your partner does not value secrecy, the best protocol in the world will not stop them republishing your message over a different medium.
Posted May 19, 2016 9:19 UTC (Thu)
by josh (subscriber, #17465)
[Link]
The web is client-to-many-servers, not just client-to-one-server. And with WebRTC, the web also supports peer-to-peer.
Posted May 19, 2016 12:08 UTC (Thu)
by smoogen (subscriber, #97)
[Link]
I expect that once Google does a spring cleaning, figures out a way to charge for certain features that make using their closed garden useful, or it turns out that the metadata being shared was a useful side channel for the real communication… then there will be a push for federated protocols. By that point, hopefully, the things that people will find useful (or not) will be known.
Posted May 19, 2016 7:45 UTC (Thu)
by petur (guest, #73362)
[Link] (6 responses)
The only thing it brings is user/customer lock-in.
What progress has WhatsApp brought? It pinned your account to your phone so that, unlike other chat protocols, I can officially only chat on my phone and not on my tablet.
Ditto for Signal: I can only use it on my phone. How dare you call that progress? I'd say exactly THAT is back to the 90's
Lack of federation just demonstrates the inability to create or use a protocol that can work with the future and the past.
And mistakes from the past don't mean you can't come up with a protocol that also describes how to handle the future.
Posted May 19, 2016 9:56 UTC (Thu)
by federico3 (guest, #101963)
[Link]
I'm glad finally somebody said it.
> Lack of federation just demonstrates the inability to create or use a protocol that can work with the future and the past.
"unwillingness" more than "inability", I'd say.
Posted May 19, 2016 16:36 UTC (Thu)
by smurf (subscriber, #17840)
[Link] (1 responses)
You can move WhatsApp from one phone to another. No problem. Just root the thing and backup+restore the data with TitaniumBackup.
My belly-ache with the whole fragmented-messenger fiasco is twofold. I can't know beforehand which messenger a peer might be using: I need to feed my phone numbers to all of them. Plus, *something* must eat all the RAM and CPU on my phone … why not install 20 additional messengers, learn the idiosyncrasies of their UIs, deal with crashes, deal with broken sync …
The second problem is most of these messengers don't interoperate and don't have any sort of API. I want my computer to text me? Right: install another specialized messenger. I want to create a chat group between >2 people? Right: tell all of them to install a common chat tool.
Posted May 20, 2016 18:34 UTC (Fri)
by eternaleye (guest, #67051)
[Link]
Lock-in isn't "can't switch devices" unless the provider locking you in is the device provider.
It's "can't switch away from the provider's product" - which results in the 20 additional messengers, the lack of interoperation, etc.
Posted May 19, 2016 18:18 UTC (Thu)
by Seegras (guest, #20463)
[Link] (2 responses)
Yeah, either we've got a completely fragmented market, where every 2 weeks a new state-of-the-art messenger pops up. Like now. WhatsApp, Signal, I don't even know the IM of this week.
Or one of these manages to take off. And then we've got a lock-in. After that, it will get stale and hinder any newer development.
Posted May 20, 2016 2:12 UTC (Fri)
by krakensden (subscriber, #72039)
[Link] (1 responses)
Posted May 20, 2016 18:32 UTC (Fri)
by yroyon (guest, #99220)
[Link]
Slack at work, Couple with the spouse, etc, etc. So many. Each has a niche.
Posted May 19, 2016 8:53 UTC (Thu)
by Jandar (subscriber, #85683)
[Link]
As noted, Signal leaks metadata. This leakage is an insecurity (the US administration has admitted to killing based on metadata), so Signal is convenient and at most somewhat secure.
Posted May 19, 2016 13:08 UTC (Thu)
by javispedro (guest, #83660)
[Link] (3 responses)
E.g. end-to-end encryption does not really require changes to the protocol (on the contrary: if you need to change the protocol, it most probably means you're leaking information to the server).
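The point can be made concrete with a small sketch of my own (not code from Signal or any XMPP client), assuming the PyNaCl package: the end-to-end layer lives entirely in the clients, and the server-facing protocol just carries an opaque blob, so nothing in the transport has to change. Key distribution and verification are ignored here.

    # Sketch: end-to-end encryption layered on an unchanged transport.
    # Assumes the PyNaCl package; key exchange/verification is out of scope.
    from nacl.public import PrivateKey, Box

    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    def send(transport_send, sender_sk, recipient_pk, plaintext):
        # The transport (XMPP, HTTP, whatever) is untouched: it only sees bytes.
        transport_send(bytes(Box(sender_sk, recipient_pk).encrypt(plaintext)))

    def receive(recipient_sk, sender_pk, wire_bytes):
        return Box(recipient_sk, sender_pk).decrypt(wire_bytes)

    mailbox = []                           # stand-in for any dumb server
    send(mailbox.append, alice, bob.public_key, b"meet at noon")
    print(receive(bob, alice.public_key, mailbox[0]))   # b'meet at noon'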
Posted May 19, 2016 13:41 UTC (Thu)
by pizza (subscriber, #46)
[Link] (2 responses)
The funny thing about XMPP interoperability is that it was the big players (most notably Google) that were the worst offenders. For example, Google Talk was subtly incompatible with the Jingle spec and reference implementation that Google itself authored.
Posted May 26, 2016 3:33 UTC (Thu)
by Garak (guest, #99377)
[Link] (1 responses)
Posted May 26, 2016 11:13 UTC (Thu)
by pizza (subscriber, #46)
[Link]
I don't see this at all. There is real value in a third party providing services for folks who can't be bothered to do it themselves -- and I say this as someone who chooses to run his own.
The home-server-persecution bit is largely ISP driven because it breaks their asymmetric download-heavyish models they've based their pricing on. That, and the sad fact that most home systems' "servers" are really just spam bots and sources of various forms of malware.
Posted May 19, 2016 15:44 UTC (Thu)
by ortalo (guest, #4654)
[Link]
Conversely, this may be viewed as a natural disadvantage with respect to the sophisticated centralized systems that offer all the bells and whistles at the risk of fast collapse.
Maybe we just need both designs in the right places (one where we need durability and the other where we need sophistication). Looks somehow like wheels to me in some sense... ;-)
Posted May 19, 2016 17:08 UTC (Thu)
by Creideiki (subscriber, #38747)
[Link] (3 responses)
> One of the most user-friendly choices that Signal made was to use the phone numbers already stored in the contacts list as the identifiers for sending messages—exactly like regular SMS text messages.
I respectfully disagree. This stubborn insistence that one person = one phone number = one phone brings me a lot of pain. Most of the people I want to talk to via Signal have two phones and two numbers - one personal, one for work. I go one step further and use a dual-SIM phone. Trying to control, or even ascertain, which identity is used at each endpoint of a conversation is an exercise in hair-pulling frustration.
Posted May 20, 2016 16:49 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
Posted Oct 12, 2018 0:28 UTC (Fri)
by tonyblackwell (guest, #43641)
[Link] (1 responses)
Posted Oct 12, 2018 5:52 UTC (Fri)
by zdzichu (subscriber, #17118)
[Link]
One of the worst, non-discoverable interfaces. :(
Posted May 19, 2016 21:53 UTC (Thu)
by flussence (guest, #85566)
[Link]
Maybe because the silent majority of XMPP servers and clients don't phone home to one central authority? The rest of his statements are of a similarly dismissable begging-the-question variety — Signal is just another Yahoo Messenger in the grand scheme of things.
Posted May 20, 2016 16:36 UTC (Fri)
by aemerson (guest, #104509)
[Link] (6 responses)
Bringing IPv6 up is just a red herring. Non-Federated Internet Protocol would run the network in some circle of Hell.
Open, multiple-implementor protocols get developed quickly enough; it's the adoption of new work that can be slow. As usual, Network Effects Ruin Everything. Some things, like end-to-end encryption, really can't and shouldn't be negotiated. A flag day really IS the best way to do that.
And federated protocols CAN do that. Developers can remove support for old, insecure fallbacks. (Like SSL libraries have done.) Providers can declare flag days and say they'll only federate with the new version of the open, interoperable protocol that has much better security properties, so everyone knows when the deadline is coming.
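The flag-day idea boils down to a hard floor on the protocol versions a server will still federate with after an announced date. A toy sketch of that, with no particular protocol implied and invented dates and version numbers:

    # Toy sketch of flag-day enforcement in a federated service: after the
    # announced date, refuse peers that only offer versions below the floor.
    from datetime import date

    FLAG_DAY = date(2017, 1, 1)       # hypothetical, announced well in advance
    MIN_VERSION_AFTER_FLAG_DAY = 2    # e.g. "v2 makes end-to-end crypto mandatory"

    def negotiate(peer_versions, today):
        floor = MIN_VERSION_AFTER_FLAG_DAY if today >= FLAG_DAY else 1
        acceptable = [v for v in peer_versions if v >= floor]
        if not acceptable:
            raise ConnectionError("peer only offers %s; versions below %d were "
                                  "retired on %s" % (sorted(peer_versions),
                                                     floor, FLAG_DAY))
        return max(acceptable)

    print(negotiate({1, 2}, date(2016, 6, 1)))   # 2: new version preferred early
    print(negotiate({1}, date(2016, 6, 1)))      # 1: tolerated before the flag day
    # negotiate({1}, date(2017, 6, 1))           # raises: old-only peers are cut off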
(The claim that federated protocols 'rally around' large providers actually vitiates the rest of Mr. Marlinspike's argument. If there are a small number of large providers they can coordinate. Or a single provider can take the initiative and refuse to support older versions, making the others support them. All the disadvantages of federation disappear. I'm not necessarily convinced 'large providers' are unavoidable, but if Mr. Marlinspike is, he should throw the rest of his anti-federation views in the garbage.)
Centralized protocols of course have their own problems. You may have noticed, but there isn't much in the way of running Signal on something other than a smartphone (there is an application built into Google Chrome, but by all accounts it isn't very good), or if you have no phone at all. This kind of sucks.
If someone wrote a GTK+ client for Signal or a client to run in a terminal, would Mr. Marlinspike say he doesn't want THEIR project connecting to HIS servers? And he's already ruled out federation? That sucks too, doesn't it?
If I want to make Signal the input or output to a complex pipeline, say filtering piles of messages into interesting places, using some to update displays on my desktop and queueing up some for later examination, extracting information from others, populating an Org mode document with times I plan to meet people, can I? No? Not unless I convince Mr. Marlinspike and the other Signal developers to implement it? (Sure, I could implement it myself, but they do not want Other Projects on Their Servers and they won't talk to any other servers.) That kind of sucks, too.
Stuart Cheshire turned DNS into an awesome zero-configuration/service discovery (yes, DNS-SD can and often does run over regular DNS, not just mDNS) protocol without having to convince the Centralized DNS Authority to let him. Avahi reimplemented Cheshire's protocol and talks to the original and other implementations. It works wonderfully. They didn't have to talk anyone into it or get told that nobody would interoperate with them. (And people store all kinds of other things in the DNS, too, more variety every year.)
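For readers unfamiliar with the DNS-SD trick mentioned above: it layers discovery onto ordinary record types (PTR to enumerate service instances, SRV for host and port, TXT for attributes), so plain unicast DNS needed no changes. A minimal sketch, assuming the third-party dnspython package and a hypothetical example.com zone that actually publishes such records:

    # Sketch of wide-area DNS-SD (RFC 6763) lookups over plain unicast DNS.
    # Assumes the "dnspython" package; example.com is a hypothetical zone
    # that publishes _ipp._tcp (printer) records.
    import dns.resolver

    def discover(service, domain):
        for ptr in dns.resolver.resolve("%s.%s" % (service, domain), "PTR"):
            name = ptr.target.to_text()                   # one service instance
            srv = next(iter(dns.resolver.resolve(name, "SRV")))
            txt = dns.resolver.resolve(name, "TXT")
            yield name, srv.target.to_text(), srv.port, [t.to_text() for t in txt]

    for instance in discover("_ipp._tcp", "example.com"):
        print(instance)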
Google sends and receives email from anyone, and they do all kinds of things with email that RFC 822 never mentions (making your flight itinerary automatically pop up on your phone?) and they didn't have to convince the Central Email Authority to implement it for them.
The Suckless people shatter IRC into a whole pile of FIFOs that you can run through the Unix meat grinder however you like. They didn't have to ask the IRC authority.
And where are the awesome centralized protocols of the 2000s? Surely with advantages like these and a popular Internet they must have evolved quickly, leaving everything else in the dust, making ICQ and MySpace the dominant platforms in the market today....?
Thank you, no. If someone starts a project to fork Signal and create a federated version that isn't so wedded to the phone (heck, if they make an open version that lets me use something other than a phone number as an ID, I'll write the desktop client), I will happily donate to a kickstarter or send paypal or anything else to hurry it along.
I'll leave centralized network paradigms to do what they have done throughout history: suck and die.
Posted May 20, 2016 18:03 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (5 responses)
No, they can not. Clients _can_ remove support for _some_ bad fallbacks, after years of gradual deployment. Servers are usually stuck pretty much for a decade (e.g.: SSLv3 deprecation).
So if in your book a decade to make a change is quick, then I don't want to know what is "slow".
> If I want to make Signal the input or output to a complex pipeline, say filtering piles of messages into interesting places, using some to update displays on my desktop and queueing up some for later examination, extracting information from others, populating an Org mode document with times I plan to meet people, can I? No?
You can, nobody stop you personally. However, you should be prepared for your scripts to break at any moment if Signal makes an incompatible change.
That might be OK for a homegrown project that nobody cares about, but if you try that with actual real-life users who depend on your tools...
Posted May 26, 2016 5:58 UTC (Thu)
by madhatter (subscriber, #4665)
[Link] (4 responses)
> Servers are usually stuck pretty much for a decade (e.g.: SSLv3 deprecation).
# rcsdiff -r1.45 /etc/httpd/conf/httpd.conf
1113a1114
> SSLProtocol All -SSLv2 -SSLv3
I'd have to check my daybook to be sure, but my memory is that it took me less than a decade to type and commit the above change.
Posted May 26, 2016 6:02 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Duh. You probably don't have an expensive middlebox in front of your server doing load balancing.
Posted May 26, 2016 6:35 UTC (Thu)
by madhatter (subscriber, #4665)
[Link]
Posted May 26, 2016 8:18 UTC (Thu)
by chojrak11 (guest, #52056)
[Link] (1 responses)
Said last person in the world still using RCS routinely...
Posted May 26, 2016 8:24 UTC (Thu)
by madhatter (subscriber, #4665)
[Link]
We could certainly have a discussion about the pros and cons of localised-lightweight vs centralised-heavyweight source control, and it might even be interesting, but here is probably not the right place to do it. If you think that my choice of source-control applications has any bearing on my underlying argument, please feel free to argue your case. Do, however, bear in mind Pirsig's dictum that "the world's biggest fool can say the sun is shining, but that doesn't make it dark out".
Posted May 22, 2016 23:24 UTC (Sun)
by tincho (guest, #74251)
[Link]
Re-posting from a comment I left on mjg59's blog. Two years ago, after a day of talks at FOSDEM, some friends and I had a conversation about similar topics. It was triggered by the then new kid on the block: Telegram. I wrote a blog post about that shortly after. In the next few months I kept thinking about the problem of having a user-friendly, federated, secure system for RTC. Even though it went unnoticed and I did not do any real work on it, I wrote a series of posts discussing ways in which this could be done, here. Maybe this interests somebody who has the time and resources to help make it a reality.
Posted May 24, 2016 15:44 UTC (Tue)
by mirabilos (subscriber, #84359)
[Link]
On the other hand, I do strongly agree that the extensibility of XMPP basically killed off Jabber. I’m back to just IRC except for a (non-federated) work-internal server, and occasionally (say twice a month) firing up a Jabber client to get a message through (though I mostly eMail those people instead).
Posted May 30, 2016 1:18 UTC (Mon)
by PaulWay (guest, #45600)
[Link]
I agree with other analysis here: the examples Moxie uses for "federated" protocols failing are not only inaccurate, they're just wrong. And most of his examples are examples of people making simple assumptions that later turned out to be wrong. IPv4 has failed because it assumed that there would never be more than 2^31 (or so) hosts on the internet. HTTP 1.0 failed because it assumed that each session would make one request and terminate and that session setup (TCP and SSL) would be cheap. Etc. etc. That was partially addressed in HTTP 1.1, but there are lots of things that HTTP 2.0 does that simply can't fit into HTTP 1.1 and require a new negotiation system. Fundamentally all of these are problems with backward and forward compatibility.
So to me part of Signal's problem with using federation is caused simply by them not thinking in the long term about what they would need and making sure that things would be forward and backward compatible. You don't have to worry about compatibility if you just force your client and your server to always use matching protocols, but it means that you, and no one else, can be compatible with you. And that's exactly what Moxie argues for.
Have fun,
Paul
Posted Feb 7, 2017 1:58 UTC (Tue)
by CBiX (guest, #113959)
[Link]