Interoperability or Death? (session documentation)

Imported back from the hackpad: https://lote5.hackpad.com/FRI-1100-1230-Interoperability-or-Death-Talk-by-Meredith-Patterson-npSydPvJjk4 Thanks to all note takers.
 
FRI 11:00 - 12:30 |  Talk by Meredith Patterson
 
This session was about protocols for human-to-human interaction. The meatspace equivalent of conflicts in internet history is process lock-in and the lack of a common language.
 
Meredith suggests that we bake in failure-handling:
 
  • Figure out error codes and named exceptions: what kinds of errors would you be dealing with?
  • Hardware exceptions: are they resumable or non-resumable (recoverable or non-recoverable, i.e. an error from which we can recover)?
  • Exception handling: unwinding stacks (last in, first out). Rewind back to just before things exploded and then play them back really slowly to understand how things went wrong.
  • To create conditions for manageable failures, use progressive enhancement: start simple and then build on that.
  • Have a way to determine and close failed projects / dead projects.
  • Understand that collapse is also a form of self-organized criticality (not only when initiatives take off).
  • Learn to identify anti-patterns: THE SOLUTION THAT LOOKS OBVIOUSLY RIGHT BUT IS ACTUALLY WRONG.
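The named-exceptions and stack-unwinding ideas above can be sketched in Python. This is a minimal illustration, not anything from the talk; the exception names (`StalledStep`, `FatalStep`) and the `run_project` helper are hypothetical:

```python
class StalledStep(Exception):
    """Resumable failure: skip this step and carry on."""

class FatalStep(Exception):
    """Non-resumable failure: unwind everything done so far."""

def run_project(steps, log):
    """Run (name, action) steps, handling the two named failure kinds."""
    done = []
    for name, action in steps:
        try:
            action()
            done.append(name)
            log.append("did " + name)
        except StalledStep:
            # Resumable: note the failure and continue with the next step.
            log.append("skipped " + name)
        except FatalStep:
            # Non-resumable: unwind the stack, last in, first out,
            # then re-raise so the caller sees the failure.
            for prev in reversed(done):
                log.append("undid " + prev)
            raise
```

The point of the sketch is the contrast: a resumable error is handled locally and execution continues, while a non-resumable one forces an orderly unwind back to whoever started the work.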
 
Pragmatics in linguistic research.
 
Lessons learned from 40 years of software development
- Organizations which design systems are constrained to produce designs which are copies of the communication structures of those organizations.
- Organisations act to defend their incentives. When people are going to fight over protecting their incentives, the turf they will be doing it on, the landscape of contention, will be interoperability.
- Reference: Conway's law
 
NIST (the standards body in the US that certifies cryptographic technology): they have a very formal and rigorous process. It turned out Dual_EC_DRBG was backdoored, and the NSA had manipulated NIST into certifying it.
 
Lots of ways to fail (in Engineering)
 
  • Open: you can still pass things through when control fails. E.g. a valve gets stuck open and all the water flows through it.
  • Closed: when control fails, all traffic is blocked.
  • Safe: "what causes minimal harm?" is a question that can only be answered if you know what in the system can cause harm.
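The fail-open vs. fail-closed choice above can be made concrete with a tiny sketch. This is an illustration only; the `gate` function and its `fail_mode` parameter are hypothetical, not from the talk:

```python
def gate(request, control, fail_mode):
    """Decide whether to let a request through.

    `control` is the check that normally decides; `fail_mode` says
    what to do when the control itself breaks.
    """
    try:
        return control(request)
    except Exception:
        if fail_mode == "open":
            return True    # fail open: keep letting traffic through
        if fail_mode == "closed":
            return False   # fail closed: block everything
        raise              # no default defined: surface the failure
```

Neither default is universally "safe": a stuck-open water valve floods, a stuck-closed fire exit kills. You can only pick the safe mode once you know where the harm in your particular system lives.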
 
1. Interoperability. 
Video of Eddie Izzard on the Church of England.
If asked to choose between cake or death, clearly you would pick cake.
But when it comes to getting people to interact/cooperate, it sometimes seems like people would rather pick death than learn to interoperate.
 
sandstorm.io: a cloud-based collaboration tool with good fine-grained security
 

1. Interoperable, adj. Capable of being used or operated reciprocally.

2. Reciprocity?

When two or more people understand what the other one is capable of doing, and what they have the time and resources to do, that is legibility. E.g. web services.

Reliability is a big concern.
 
3. How about in technologies?
- Interfaces that communicate the above in a clear and structured way (APIs, usage(), WYSIWYG, language references)
- Robustness: the absence of unmitigable surprise. It's a field of study.
- Explicit is better than implicit, e.g. "The Zen of Python"
 
The Zen of Python, by Tim Peters 
 
Beautiful is better than ugly. 
Explicit is better than implicit. 
Simple is better than complex. 
Complex is better than complicated. 
Flat is better than nested. 
Sparse is better than dense. 
Readability counts. 
Special cases aren't special enough to break the rules. 
Although practicality beats purity. 
Errors should never pass silently. 
Unless explicitly silenced. 
In the face of ambiguity, refuse the temptation to guess. 
There should be one-- and preferably only one --obvious way to do it. 
Although that way may not be obvious at first unless you're Dutch. 
Now is better than never. 
Although never is often better than *right* now. 
If the implementation is hard to explain, it's a bad idea. 
If the implementation is easy to explain, it may be a good idea. 
Namespaces are one honking great idea -- let's do more of those! 
 
4. So how did we get here? A deep dive into the history of internet (tech-related) conflicts and how they were resolved...
 

The 1970s: protocol interoperability. Formally defining machine protocols was very useful in the Bell Labs environment.

 
A protocol is an agreement between machines about how to interact.
 
- How to solve conflict? The editor wars (vi/emacs) are still ongoing.

- The parallel to protocol interoperability in meatspace is process. It may be explicit or implicit. People have to agree on how they're going to do a thing if they're going to do it together, before they start doing it.

- Common knowledge: I know that you know that I know, over and over. The formal definition of common knowledge is an infinite regress.
 
The protocol/process of this session is a lecture: everyone is happy to sit and listen to Meredith for an hour. In a different context, Meredith talking for an hour would not be acceptable.
 
The 1980s Architectural compatibility

IBM defined a hardware format for the PC, and it was easy to copy. Apple has done the same, but tightly controls its hardware; you cannot buy their components on the open market.

IBM tried to protect through courts and eventually lost.
 
- Meatspace equivalent: how you are going to set standards within organizations.
 
The 1990s Presentation-layer compatibility
Each browser manufacturer was coming up with its own dialect of HTML and its own layout and styling of webpages.
Microsoft tried to dominate the protocol-making process:
Embrace ("the internet is great, we're gonna help"), Extend ("look at all these great new features"), Extinguish ("you can't know how they work"). They lost.
 
You use different language depending on who you are trying to express yourself to (the Pope, a squatter, your gran).
How people react depends on what you say and who the audience is. That is part of what makes collaboration hard: it's not just what you say, it's also how your audience is primed to receive it.

 
The 2000s DRM wars
Meme: oh you like the kindle?
 
Meatspace equivalent: process lock-in. Process (and a shared understanding of language) develops based on what works; the danger is that it stops developing at some stage and no longer follows the context in which it is used.
 
Error correction / exception handling:
One technique in software is unwinding the stack: the last one in is the first one out.
Pull things off until you find something that works.
 
In meatspace: try to rewind to before the conflict, replay it, and see if you can sort it out. This has worked well for Meredith.
 
Cybercrime laws in the US are a good example of laws (process) not developing: they were written for how things used to be.
 
…And back to today

- Stuff mostly just works. You don't have to look at a manual or at the help.

 
- Mainly because of a common language, HTTP. Parallels to Elio's suggestion.
- There is a lot going on under the hood that you don't see.
 
All the big stuff today:
 
Twitter, Facebook, Reddit, Slack.
All centralized: they work, they make money (from advertising), and they are vulnerable.
If you're not paying, you're the product.
 

DIGG: pulled down a post containing the DVD encryption key. The community went nuts: almost every post had the key pasted into it.

REDDIT: the firing of the employee who ran AMAs made users go berserk; users hit back by making content private.

TWITTER: troll problems.
 

What a bad miss

Thank you so much for posting this, Darren. I am heartbroken to have lost this session. It relates to my own work on Protocol:

https://edgeryders.eu/en/unmonastery/protocol-01-engineering-human-to-human-interaction-for

I’ll ask Meredith in person…
