All Is Simple Parts Interacting Simply

In physics, I got a BS in ’81, a MS in ’84, and published two peer-reviewed journal articles in ’03 & ’06. I’m not tracking the latest developments in physics very closely, but what I’m about to tell you is very old standard physics that I’m quite sure hasn’t changed. Even so, it seems to be something many people just don’t get. So let me explain it.

There is nothing that we know of that isn’t described well by physics, and everything that physicists know of is well described as many simple parts interacting simply. Parts are localized in space, have interactions localized in time, and interaction effects don’t move through space faster than the speed of light. Simple parts have internal states that can be specified with just a few bits (or qubits), and each part only interacts directly with a few other parts close in space and time. Since each interaction is only between a few bits on a few sides, it must also be simple. Furthermore, all known interactions are mutual, in the sense that the state on each side is influenced by the states of the other sides.

For example, ordinary field theories have a limited number of fields at each point in space-time, with each field having a limited number of degrees of freedom. Each field has a few simple interactions with other fields, and with its own space-time derivatives. With limited energy, this latter effect limits how fast a field changes in space and time.

As a second example, ordinary digital electronics is made mostly of simple logic units, each with only a few inputs, a few outputs, and a few bits of internal state. Typically: two inputs, one output, and zero or one bits of state. Interactions between logic units are via simple wires that force the voltage and current to be almost the same at matching ends.
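As a toy illustration of that point (my own sketch, not something from the original discussion), a single kind of simple two-input, one-output part is already enough to build richer behavior once several copies are wired together:

```python
def nand(a: int, b: int) -> int:
    """A simple logic unit: two input bits, one output bit, no internal state."""
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    """More complex behavior (XOR) built purely from four simple NAND parts."""
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

# Truth table of the composed circuit, for inputs (0,0), (0,1), (1,0), (1,1):
print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 1, 1, 0]
```

Each part here sees only its few local inputs, yet the wired-up whole computes something none of the parts computes alone.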

As a third example, cellular automata are often taken as a clear, simple metaphor for typical physical systems. Each such automaton has a discrete array of cells, each of which has a few possible states. At discrete time steps, the state of each cell is a simple standard function of the states of that cell and its neighbors at the last time step. The famous “game of life” uses a two-dimensional array with one bit per cell.
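To make that rule concrete, here is a minimal sketch (mine, purely illustrative) of one “game of life” time step; the “blinker” is a standard three-cell pattern that oscillates between a row and a column:

```python
def life_step(grid):
    """One time step of the game of life on a 2-D list of 0/1 cells.

    Each cell's next state depends only on its own state and its
    neighbors' states at the last step — simple parts, simple rule.
    """
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbors (cells beyond the edge treated as dead).
            live = sum(
                grid[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if (rr, cc) != (r, c)
            )
            # Standard rule: survive on 2-3 live neighbors, birth on exactly 3.
            nxt[r][c] = 1 if (live == 3 or (grid[r][c] and live == 2)) else 0
    return nxt

blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))  # → [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Nothing in the rule mentions “blinkers”; the oscillation is just what many simple local interactions add up to.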

This basic physics fact, that everything is made of simple parts interacting simply, implies that anything complex, able to represent many different possibilities, is made of many parts. And anything able to manage complex interaction relations is spread across time, constructed via many simple interactions built up over time. So if you look at a disk of a complex movie, you’ll find lots of tiny structures encoding bits. If you look at an organism that survives in a complex environment, you’ll find lots of tiny parts with many non-regular interactions.

Physicists have learned that we only ever get empirical evidence about the state of things via their interactions with other things. When such interactions with the state of one thing create correlations with the state of another, we can use that correlation, together with knowledge of one state, as evidence about the other state. If a feature or state doesn’t influence any interactions with familiar things, we could drop it from our model of the world and get all the same predictions. (Though we might include it anyway for simplicity, so that similar parts have similar features and states.)

Not only do we know that in general everything is made of simple parts interacting simply, for pretty much everything that happens here on Earth we know those parts and interactions in great precise detail. Yes there are still some areas of physics we don’t fully understand, but we also know that those uncertainties have almost nothing to say about ordinary events here on Earth. For humans and their immediate environments on Earth, we know exactly what are all the parts, what states they hold, and all of their simple interactions. Thermodynamics assures us that there can’t be a lot of hidden states around holding many bits that interact with familiar states.

Now it is true that when many simple parts are combined into complex arrangements, it can be very hard to calculate the detailed outcomes they produce. This isn’t because such outcomes aren’t implied by the math, but because it can be hard to calculate what math implies. When we can figure out quantities that are easier to calculate, as long as the parts and interactions we think are going on are in fact the only things going on, then we usually see those quantities just as calculated.

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

Thus it seems hard to square a belief in this extra feeling stuff with standard physics in either case, whether feeling stuff does or does not have strong interactions with ordinary stuff. The obvious conclusion: extra feeling stuff just doesn’t exist.

Note that even if we are only complex arrangements of interacting parts, as social creatures it makes sense for us to care in a certain sense about each other’s “feelings.” Creatures like us maintain an internal “feeling” state that tracks how well things are going for us, with high-satisfied states when things are going well and low-dissatisfied states when things are going badly. This internal state influences our behavior, and so social creatures around us want to try to infer this state, and to influence it. We may, for example, try to notice when our allies have a dissatisfied state and look for ways to help them to be more satisfied. Thus we care about others’ “feelings”, are wary of false indicators of them, and study behaviors in some detail to figure out what reliably indicates these internal states.

In the modern world we now encounter a wider range of creature-like things with feeling-related surface appearances. These include video game characters, movie characters, robots, statues, paintings, stuffed animals, and so on. And so it makes sense for us to apply our careful-study habits to ask which of these have “real” feelings, in the sense of being those where it makes sense to apply our evolved feeling-related habits. But while it makes sense to be skeptical that any particular claimed feeling is “real” in this sense, it makes much less sense to apply this skepticism to “mere” physical systems. After all, as far as we know, all familiar systems, and all the systems they interact with to any important degree, are mere physical systems.

If everything around us is explained by ordinary physics, then a detailed examination of the ordinary physics of familiar systems will eventually tell us everything there is to know about the causes and consequences of our feelings. It will say how many different feelings we are capable of, what outside factors influence them, and how our words and actions depend on them.

What more is there, or could there be, to know about feelings than this? For example, you might ask: does a system have “feelings” if it has some of the same internal states as a human, but where those states have no dependence on outside factors and no influence on the world? But questions like this seem to me less about the world and more about what concepts are the most valuable to use in this space. While crude concepts served us well in the past, as we encounter a wider range of creature-like systems than before, we will need to refine our concepts for this new world.

But, again, that seems to be more about what feelings concepts are useful in this new world, and much less about where feelings “really” are in the world. Physics can tell us all there is to say about that.

(This post is a follow-up to my prior post on Sean Carroll’s Big Picture.)

Facebook Versus the Media

Facebook found itself in the middle of another media controversy last week. Here’s the New York Times:

The image is iconic: A naked, 9-year-old girl fleeing napalm bombs during the Vietnam War, tears streaming down her face. The picture from 1972, which went on to win the Pulitzer Prize for spot news photography, has since been used countless times to illustrate the horrors of modern warfare.

But for Facebook, the image of the girl, Phan Thi Kim Phuc, was one that violated its standards about nudity on the social network. So after a Norwegian author posted images about the terror of war with the photo to Facebook, the company removed it.

The move triggered a backlash over how Facebook was censoring images. When a Norwegian newspaper, Aftenposten, cried foul over the takedown of the picture, thousands of people globally responded on Friday with an act of virtual civil disobedience by posting the image of Ms. Phuc on their Facebook pages and, in some cases, daring the company to act. Hours after the pushback, Facebook reinstated the photo across its site.

This, like many of Facebook’s recent run-ins with the media, has been like watching an old couple fight: they are nominally talking about the same episode, but in reality both are so wrapped up in their own issues and grievances that they are talking past each other.

Facebook Owns Facebook.com

Start with the media. Aftenposten Editor-in-chief Espen Egil Hansen wrote an open letter to Facebook CEO Mark Zuckerberg that was, well, pretty amazing, and I’m not sure that’s a compliment:

Facebook has become a world-leading platform for spreading information, for debate and for social contact between persons. You have gained this position because you deserve it. But, dear Mark, you are the world’s most powerful editor. Even for a major player like Aftenposten, Facebook is hard to avoid. In fact we don’t really wish to avoid you, because you are offering us a great channel for distributing our content. We want to reach out with our journalism.

However, even though I am editor-in-chief of Norway’s largest newspaper, I have to realize that you are restricting my room for exercising my editorial responsibility. This is what you and your subordinates are doing in this case.

Actually, no, that is not what is happening at all. Aftenposten is not Facebook, and Facebook is not “Norway’s largest newspaper”. Accordingly, Facebook — and certainly not Mark Zuckerberg — did not take the photo down from Aftenposten.no. They did not block the print edition. They did not edit dear Espen. Rather, Facebook removed a post on Facebook.com, which Aftenposten does not own, and which Hansen admits in his own open letter is something freely offered to the newspaper, one that they take because it is “a great channel for distributing our content.”

Let me foreshadow what I will say later: Facebook screwed this up. But that doesn’t change the fact that Facebook.com is a private site, and while Aftenposten is more than happy to leverage Facebook for its own benefit that by no means suggests Aftenposten has a single iota of ownership over its page or anyone else’s.

The Freedom of the Internet

Unfortunately, Hansen’s letter gets worse:

The media have a responsibility to consider publication in every single case. This may be a heavy responsibility. Each editor must weigh the pros and cons. This right and duty, which all editors in the world have, should not be undermined by algorithms encoded in your office in California…

The least Facebook should do in order to be in harmony with its time is introduce geographically differentiated guidelines and rules for publication. Furthermore, Facebook should distinguish between editors and other Facebook-users. Editors cannot live with you, Mark, as a master editor.

I’ll be honest, this made me mad. Hansen oh-so-blithely presumes that he, simply by virtue of his job title, is entitled to special privileges on Facebook. But why, precisely, should that be the case? The entire premise of Facebook, indeed, the underpinning of the company’s success, is that it is a platform that can be used by every single person on earth. There are no gatekeepers, and certainly no outside editors. Demanding special treatment from Facebook because one controls a printing press is not only nonsensical, it is downright antithetical to not just the premise of Facebook but the radical liberty afforded by the Internet. Hansen can write his open letter on aftenposten.no and I can say he’s being ridiculous on stratechery.com and there is not a damn thing anyone, including Mark Zuckerberg, can do about it.1

Make no mistake, I recognize the threats Facebook poses to discourse and politics; I’ve written about them explicitly. There are very real concerns that people are not being exposed to news that makes them uncomfortable, and Hansen is right that the photo in question is an example of exactly why making people feel uncomfortable is so important.

But it should also not be forgotten that the prison of engagement-driving news that people are locking themselves in is one of their own making: no one is forced to rely on Facebook for news, just as Aftenposten isn’t required to post its news on Facebook. And on the flipside, the freedom and reach afforded by the Internet remain so significant that the editor-in-chief of a newspaper I had never previously read can force the CEO of one of the most valuable companies in the world to accede to his demands by rousing worldwide outrage.

These two realities are inescapably intertwined, and as a writer who almost certainly would have never been given an inch of space in Aftenposten, I’ll stick with the Internet.

Facebook is Not a Media Company

One more rant, while I’m on a roll: journalists everywhere are using this episode to again make the case that Facebook is a media company. This piece by Peter Kafka was written before this photo controversy but is an excellent case-in-point (and, sigh, it is another open letter):

Dear Mark, We get it. We understand why you don’t want to call Facebook a media company. Your investors don’t want to invest in a media company, they want to invest in a technology company. Your best-and-brightest engineers? They don’t want to work at a media company. And we’re not even going to mention Trending Topicgate here, because that would be rude.

But here’s the deal. When you gather people’s attention, and sell that attention to advertisers, guess what? You’re a media company. And you’re really good at it. Really, really good. Billions of dollars a quarter good.

Let’s be clear: Facebook could call themselves a selfie-stick company and their valuation wouldn’t change an iota. As Kafka notes later in the article, Facebook gets all their content for free, which is a pretty big deal.

Indeed, I think one of the (many) reasons the media is so flummoxed with Facebook is that the company has stolen their business model and hugely improved on it. Remember, the entire reason why the media was so successful was because they made massive fixed cost investments in things like printing presses, delivery trucks, wireless spectrum, etc. that gave them monopolies or at worst oligopolies on local attention and thus advertising. The only fly in the ointment was that actual content had to be created continuously, and that’s expensive.

Facebook, like all Internet companies, takes the leverage of fixed costs to an exponentially greater level and marries that with free content creation that is far more interesting to far more people than old media ever was, which naturally attracts advertisers. To put it in academic terms, the Internet has allowed Facebook to expand the efficient frontier of attention gathering and monetization, ruining most media companies’ business model.

In other words, had Kafka insisted that Facebook is an advertising company, just like media companies, I would nod in agreement. That advertising, though, doesn’t just run against journalism: it runs against baby pictures, small businesses, cooking videos and everything in between. Facebook may be everything to the media, but the media is one of many types of content on Facebook.


In short, as long as Facebook doesn’t create content I think it’s a pretty big stretch to say they are a media company; it simply muddies the debate unnecessarily, and this dispute with Aftenposten is a perfect example of why being clear about the differences between a platform and a media company is important.

The Facebook-Media Disconnect

The disconnect in this debate reminds me of this picture:


Ignore the fact that Facebook owns a VR company; the point is this: Facebook is, for better or worse, running a product that is predicated on showing people exactly what they want to see, all the way down to the individual. And while there is absolutely editorial bias in any algorithm, the challenge is indeed a technical one being worked out at a scale few can fully comprehend.

That Norwegian editor-in-chief, meanwhile, is still living in a world in which he and other self-appointed gatekeepers controlled the projector for the front of the room, and the facts of this particular case aside, it is awfully hard to avoid the conclusion that he and the rest of the media feel entitled to individuals’ headsets.

Facebook’s Mistake

Still, the facts of this case do matter: first off, quite obviously this photo should never have been censored, even if the initial flagging was understandable. What is really concerning, though, was the way Facebook refused to back down, not only continuing to censor the photo but actually barring the journalist who originally posted it from the platform for three days. Yes, this was some random Facebook staffer in Hamburg, but that’s the exact problem! No one at Facebook’s headquarters seems to care about this stuff unless it turns into a crisis, which means such crises are only going to continue, with potentially unwanted effects.2

The truth is that Facebook may not be a media company, but users do read a lot of news there; by extension, the company may not have a monopoly in news distribution, but the impact of so many people self-selecting Facebook as their primary news source has significant effects on society. And, as I’ve noted repeatedly, society and its representatives may very well strike back; this sort of stupidity via apathy will only hasten the reckoning.2

  1. It should be noted that this is exactly why the Peter Thiel-Gawker episode was so concerning.
  2. And, I’d add, this is exactly why I think Facebook should have distanced itself from Thiel.

Instagram to third-party developers: drop dead

I’m pretty much done with Instagram. I never loved it, but it’s where most of my friends looked for my photos, so I made peace with it as a platform—and continued to use poor, old, widely unloved Flickr for more serious photo sharing. Now, though, for all I care, Instagram can get bent.

There’s a lot you can’t do with Instagram natively, but clever third-party programmers have made the platform useful and enjoyable for people who wanted more. And now, that’s over.

Instagram lowers the boom

On June 1, Instagram severely restricted what any third-party Instagram application can do. Third-party apps can no longer provide features that Instagram’s API supports but Instagram itself doesn’t offer; they can’t even compete with the restricted feature set Instagram natively provides.

The change in rules applies to all Instagram apps, on every mobile and desktop platform you can think of. Among the new restrictions:

  • Third-party apps can no longer display the Instagram feed.
  • They can no longer display “popular.”
  • They can’t show the follows or followers of any user profile.
  • Or let you download images.
  • Or let you like or comment on several images at once.
  • Or let you block tags and users of your choosing.

Most users didn’t need these features to enjoy Instagram, but they made it a far richer program for those who did. Nor does it look like Instagram intends to provide the functions it has just prevented the third-party apps from offering. The old Twitter gambit—learn from third-party apps; change your own offerings to match theirs; then change your API—looks positively user- and business-friendly by comparison. (More on the Twitter comparison in a moment.)

Instagram: success through limitation

Now, I have no problem with Instagram offering a limited feature set. Most great apps reach mass appeal precisely by focusing on a restricted feature set, designed for one or two use cases. And clearly Instagram knows how to reach mass appeal.

Instagram’s lack of feature depth has not prevented it from serving its core base of teenage celebrity photo followers. It doesn’t prevent entertainers and brands from using the platform as a publicity and marketing vehicle. It doesn’t stop amateur swimsuit models and photographers from building fan bases on the fringe of mainstream use. Those are the users and use cases Instagram was built to serve, and it serves them well. Its lack of additional features has never hurt it with these users, and its decision to kill off third-party apps shouldn’t cost Instagram a single customer from among the target user types I’ve just identified.

But it bugs me enough to make me walk away.

There are two things here: one, the functionality Instagram has taken away mattered to me as a user. And two, I don’t like what this giant, ludicrously successful company just did to a bunch of small companies run by independent developers. I mean, it’s not like these third-party companies stole the API from Instagram. Instagram offered it—and for the reason every successful product does: to let other companies extend its capabilities and increase its passionate fan base.

Makes Twitter look like sweethearts

Twitter, again, is the perfect example. In 2006, it began building a following among people like you and me, while offering a very limited feature set. In the next few years, it extended its functionality by learning from its users and by monitoring the innovations pioneered by third-party products like Twitterrific, Tweetbot, TweetDeck, and Hootsuite—innovations that made Twitter more popular and more essential to marketers, journalists, and other professional users. Eventually Twitter bought one of the third-party apps and incorporated its features (along with features developed by other third-party apps) into its core product.

Today, with Twitter’s offerings more robust as a consequence of this third-party development history, there’s arguably less need for some third-party Twitter apps. That is to say, even power users can have pretty feature-rich Twitter experiences while using Twitter’s native app or its website. Nonetheless, the third-party apps still exist, still offer experiences Twitter doesn’t, and still earn revenue for their designers and developers.

As an extremely active Twitter user for personal and business reasons, I sometimes find Twitter’s website or native app sufficient to my needs; and at other times, I need the power a third-party app provides. I know that Twitter hasn’t always made it easy for third-party developers—and I was personally chagrined when significant changes to Twitter’s API killed a little free product a design conference I co-founded built strictly for the pleasure of our attendees. But, Twitter didn’t murder its third-party ecosystem, and it didn’t obliterate features that matter to secondary but passionate users.

And Instagram just did.

Goodbye to all that

Instagram certainly won’t miss me, and its decision makers won’t read this. Nor, if they read it, would they care. So this is about me. And a slightly sick feeling in my stomach.

Not because I even really need those extra Instagram features. Flickr, while it yet lives, provides me with far richer layers of experience and capability than even the most tricked-out third-party Instagram app could dream of. I always used Instagram under protest, as a poor cousin. I used it because people were there, not because I liked it. I like Flickr, even though posting my photos there is kind of like leaving flowers at the grave of someone whose name I’ve forgotten.

No, I feel queasy because I can’t decide whether Instagram is just a bully that decided to beat up the small fry independent developers, or (more likely) a clumsy, drunken giant that doesn’t feel the bodies squashing under its feet.

And we thought Instagram was over when they changed the logo last month.

The post Instagram to third-party developers: drop dead appeared first on Zeldman on Web & Interaction Design.


Apple’s actual role in podcasting: be careful what you wish for

This New York Times article gets a lot wrong, and both podcast listeners and podcast producers should be clear on what Apple’s actual role in podcasting is today and what, exactly, big producers are asking for.

Podcasts work nothing like the App Store, and we’re all better off making sure they never head down that road.

Podcasts still work like old-school blogs:

  • Each podcast can be hosted anywhere and completely owned and controlled by its producer.
  • Podcast-player apps periodically check each subscribed podcast’s RSS feed, and when a new episode is published, they fetch the audio file directly from the producer’s site or host.
  • Monetization and analytics are completely up to the podcasters.
  • Some podcasts have their own custom listening apps that provide their creators with more data and monetization opportunities.

It’s completely decentralized, free, fair, open, and uncontrollable by any single entity, as long as the ecosystem of podcast-player apps remains diverse enough that no app can dictate arbitrary terms to publishers (the way Facebook now effectively controls the web publishing industry).1
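That mechanism really is as simple as it sounds. Here is a rough sketch of the player side (the feed contents below are made up for illustration): parse a show’s RSS feed and read each episode’s audio URL straight from its enclosure tag, with no intermediary between player and publisher.

```python
import xml.etree.ElementTree as ET

def episode_urls(feed_xml: str) -> list[str]:
    """Return the enclosure (audio file) URLs listed in a podcast RSS feed."""
    root = ET.fromstring(feed_xml)
    return [
        item.find("enclosure").attrib["url"]
        for item in root.iter("item")
        if item.find("enclosure") is not None
    ]

# A hypothetical feed; a real player would fetch this from the show's own host.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example Show</title>
  <item><title>Episode 2</title>
    <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg"/></item>
  <item><title>Episode 1</title>
    <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg"/></item>
</channel></rss>"""

print(episode_urls(SAMPLE_FEED))
```

A player app just re-fetches the feed periodically and downloads any enclosure URLs it hasn’t seen before, directly from wherever the producer chose to host them.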

Apple holds two large roles in podcasting today that should threaten its health, but haven’t yet:

  • The biggest player app: Apple’s built-in iOS Podcasts app is the biggest podcast player in the world by a wide margin, holding roughly 60–70% market share.
  • The biggest podcast directory: The iTunes Store’s Podcasts directory is the only one that matters, and being listed there is essential for podcasts to be easily found when searching in most apps.

Critically, despite having these large roles, Apple never locked out other players, dictated almost any terms to podcasters,2 or inserted themselves as an intermediary beyond the directory stage.

Like most of the iTunes Store, the podcast functionality has been almost completely unchanged since its introduction over a decade ago. And unlike the rest of the Store, we’re all better off if it stays this way.


Apple’s directory gives podcast players the direct RSS feed of podcasts found there, and then the players just fetch directly from the publisher’s feeds from that point forward. Apple is no longer a party to any activity after the search unless you’re using Apple’s player app.

There’s nothing stopping anyone else from making their own directory (a few have), and any good podcast player will let users bypass directories and subscribe to any podcast in the world by pasting in its URL.


Apple’s editorial features are unparalleled in the industry. I don’t know of anyone who applies more human curation to podcasts than Apple.

The algorithmic “top” charts, as far as podcasters have been able to piece together, are based primarily (or solely) on the rate of new subscriptions to a podcast in Apple Podcasts for iOS and iTunes for Mac.

Subscriptions happening in other apps have no effect on Apple’s promotional charts because, as long as this remains decentralized and open, Apple has no way of knowing about them.


Apple’s Podcasts app for iOS is fine, but not great, leaving the door wide open for better apps like mine. (Seriously, it’s much better, and it’s free. Trying to succeed in the App Store in 2016 is neither the time nor the place for modesty.)

Apple’s app has only a few integrations and privileges that third-party apps can’t match, and they’re of ever-decreasing relevance. They haven’t locked down the player market at all.

So let’s get back to that misguided New York Times article.

What (big) podcasters are asking for

Ignoring for the moment that “podcasters” in news articles usually means “a handful of the largest producers, a friend or two of the reporter, and a press release from The Midroll, who collectively believe they represent all podcasters, despite only being the mass-market tip of the iceberg, as if CBS represented all of television or Business Insider represented all of blogging,” and this article is no exception, what these podcasters are asking for is the same tool web publishers have used and abused to death over the last decade to systematically ruin web content nearly everywhere:

“More data.”

On the web, getting more data was easy: web pages are software, letting their publishers use JavaScript to run their own code right in your “player app” (web browser) to creepily record and analyze every move you made, selling you more effectively to advertisers and letting them algorithmically tailor their content to maximize those pennies at any cost to quality and ethics.

Podcasts are just MP3s. Podcast players are just MP3 players, not platforms to execute arbitrary code from publishers. Publishers can see which IP addresses are downloading the MP3s, which can give them a rough idea of audience size, their approximate locations, and which apps they use. That’s about it.

They can’t know exactly who you are, whether you searched for a new refrigerator yesterday, whether you listened to the ads in their podcasts, or even whether you listened to it at all after downloading it.3

Big publishers think this is barbaric. I think it’s beautiful.

Big publishers think this is holding back the medium. I think it protects the medium.

And if that ill-informed New York Times article is correct in broad strokes, which is a big “if” given how much it got wrong about Apple’s role in podcasting, big podcasters want Apple to add more behavioral data and creepy tracking to the Apple Podcasts app, then share the data with them. I wouldn’t hold my breath on that.

By the way, while I often get pitched on garbage podcast-listening-behavioral-data integrations, I’m never adding such tracking to Overcast. Never. The biggest reason I made a free, mass-market podcast app was so I could take stands like this.

Big podcasters also apparently want Apple to insert itself as a financial intermediary to allow payment for podcasts within Apple’s app. We’ve seen how that goes. Trust me, podcasters, you don’t want that.

It would not only add rules, restrictions, delays, and big commissions, but it would increase Apple’s dominant role in podcasts, push out diversity, give Apple far more control than before, and potentially destroy one of the web’s last open media ecosystems.

Podcasting has been growing steadily for over a decade and extends far beyond the top handful of public-radio shows. Their needs are not everyone’s needs, they don’t represent everyone, and many podcasters would not consider their goals an “advancement” of the medium.

Apple has only ever used its dominant position benevolently and benignly so far, and as the medium has diversified, Apple’s role has shrunk. The last thing podcasters need is for Apple to increase its role and dominance.

And the last thing we all need is for the “data” economy to destroy another medium.

  1. Companies running completely proprietary podcast platforms so far, trying to lock it down for themselves: Stitcher, TuneIn, Spotify, Google. (I haven’t checked in a while: has everyone finally stopped believing Google gives a damn about being “open”?) 

  2. Beyond prohibiting pornographic podcasts in their directory and loosely encouraging publishers to properly use the “Explicit” tag. 

  3. Unless you listen with the podcast publisher’s own app, in which case they can be just as creepy as on the web, if not more so. But as long as the open, RSS-based ecosystem of podcast players remains dominant, including Apple Podcasts, virtually nobody can afford to lock down their podcasts to only be playable from their own app. 


Performance Culture

1 Share

In this essay, I’ll talk about “performance culture.” Performance is one of the key pillars of software engineering; it’s hard to do right, and sometimes even difficult to recognize. As a famous judge once said, “I know it when I see it.” I’ve spoken at length about performance and culture independently before, however the intersection of the two is where things get interesting. Teams that do this well have performance ingrained into nearly all aspects of how they operate from the start, and are able to proactively deliver lovable customer experiences that crush the competition. There’s no easy cookie-cutter recipe for achieving a good performance culture, however there are certainly some best practices you can follow to plant the requisite seeds in your team. So, let’s go!


Why the big focus on performance, anyway?

Partly it’s my background. I’ve worked on systems, runtimes, compilers, … things that customers expect to be fast. It’s always much easier to incorporate goals, metrics, and team processes at the outset of such a project than to attempt to retrofit them later on. I’ve also worked on many teams: some that have done amazingly well at this, some that have done terribly, and many in between. The one universal truth is that the differentiating factor is always culture.

Partly it’s because, no matter the kind of software, performance is almost always worse than our customers would like it to be. This is a simple matter of physics: it’s impossible to speed up all aspects of a program, given finite time and the tradeoffs involved between size, speed, and functionality. But I firmly believe that, on average, teams pay far too little attention to developing a rigorous performance culture. I’ve heard the “performance isn’t a top priority for us” statement many times, only to see it canceled out later by the painful realization that without it the product won’t succeed.

And partly it’s just been top of mind for all of us in DevDiv, as we focus on .NET core performance, ASP.NET scalability, integrating performance-motivated features into C# and the libraries, making Visual Studio faster, and more. It’s particularly top of mind for me, as I’ve been comparing our experiences to my own in Midori (which heavily inspired this blog post).

Diagnosis and The Cure

How can you tell whether your performance culture is on track? Well, here are some signs that it’s not:

  • Answering the question, “how is the product doing on my key performance metrics,” is difficult.
  • Performance often regresses and team members either don’t know, don’t care, or find out too late to act.
  • Blame is one of the most common responses to performance problems (either people, infrastructure, or both).
  • Performance tests swing wildly, cannot be trusted, and are generally ignored by most of the team.
  • Performance is something one, or a few, individuals are meant to keep an eye on, instead of the whole team.
  • Performance issues in production are common, and require ugly scrambles to address (and/or cannot be reproduced).

These may sound like technical problems. It may come as a surprise, however, that they are primarily human problems.

The solution isn’t easy, especially once your culture is in the weeds. It’s always easier to not dig a hole in the first place than it is to climb out of one later on. But the first rule when you’re in a hole is to stop digging! The cultural transformation must start from the top – management taking an active role in performance, asking questions, seeking insights, demanding rigor – while it simultaneously comes from the bottom – engineers actively seeking to understand performance of the code they are writing, ruthlessly taking a zero-tolerance stance on regressions, and being ever-self-critical and on the lookout for proactive improvements.

This essay will describe some ways to ensure this sort of a culture, in addition to some best practices I’ve found help to increase its effectiveness once you have one in place. A lot of it may seem obvious, but believe me, it’s pretty rare to see everything in here working in harmony in practice. But when it is, wow, what a difference it can make.

A quick note on OSS software. I wrote this essay from the perspective of commercial software development. As such, you’ll see the word “management” a lot. Many of the same principles work in OSS too. So, if you like, anytime you see “management,” mentally transform it into “management or the project’s committers.”

It Starts, and Ends, with Culture

The key components of a healthy performance culture are:

  1. Performance is part of the daily dialogue and “buzz” within the team. Everybody plays a role.
  2. Management must care – truly, not superficially – about good performance, and understand what it takes.
  3. Engineers take a scientific, data-driven, inquisitive approach to performance. (Measure, measure, measure!)
  4. Robust engineering systems are in place to track goals, past and present performance, and to block regressions.

I’ll spend a bit of time talking about each of these roughly in turn.

Dialogue, Buzz, and Communication

The entire team needs to be on the hook for performance.

In many teams where I’ve seen this go wrong, a single person is anointed the go-to performance guy or gal. Now, that’s fine and can help the team scale, and can be useful when someone needs to spearhead an investigation, and having a vocal advocate for performance is great, but it must not come at the expense of the rest of the team’s involvement.

This can lead to problems similar to those Microsoft used to have with the “test” discipline: engineers learned bad habits by outsourcing the basic quality of their code, assuming that someone else would catch any problems that arose. The same risk is present when there’s a central performance czar: engineers on the team won’t write performance tests, won’t proactively benchmark, won’t profile, won’t ask questions about the competitive standing of the product, and generally won’t do all the things you need all of the engineers doing to build a healthy performance culture.

Magical things happen when the whole team is obsessed about performance. The hallways are abuzz with excitement, as news of challenges and improvements spreads organically. “Did you see Martin’s hashtable rebalancing change that reduced process footprint by 30%?” “Jared just checked in a feature that lets you stack allocate arrays. I was thinking of hacking the networking stack this weekend to use it – care to join in?” Impromptu whiteboard sessions, off-the-cuff ideation, group learning. It’s really awesome to see. The excitement and desire to win propels the team forward, naturally, and without relying on some heavyweight management “stick.”

I hate blame and I hate defensiveness. My number one rule is “no jerks,” so naturally all critiques must be delivered in the most constructive and respectful way. I’ve found a high occurrence of blame, defensiveness, and intellectual dishonesty in teams that do poorly on performance, however. Like jerks, these are toxic to team culture and must be weeded out aggressively. It can easily make or break your ability to develop the right performance culture. There’s nothing wrong with saying we need to do better on some key metric, especially if you have some good ideas on how to do so!

In addition to the ad-hoc communication, there of course needs to be structured communication also. I’ll describe some techniques later on. But getting a core group of people in a room regularly to discuss the past, present, and future of performance for a particular area of the product is essential. Although the organic conversations are powerful, everyone gets busy, and it’s important to schedule time as a reminder to keep pushing ahead.

Management: More Carrots, Fewer Sticks

In every team with a poor performance culture, it’s management’s fault. Period. End of conversation.

Engineers can and must make a difference, of course, but if the person at the top and everybody in between aren’t deeply involved, budgeting for the necessary time, and rewarding the work and the superstars, the right culture won’t take hold. A single engineer alone can’t possibly infuse enough of this culture into an entire team, and most certainly not if the entire effort is running upstream against the management team.

It’s painful to watch managers who don’t appreciate performance culture. They’ll often get caught by surprise and won’t realize why – or worse, think that this is just how engineering works. (“We can’t predict where performance will matter in advance!”) Customers will complain that the product doesn’t perform as expected in key areas and, realizing it’s too late for preventative measures, a manager whose team has a poor performance culture will start blaming things. Guess what? The blame game culture spreads like wildfire, the engineers start doing it too, and accountability goes out the window. Blame doesn’t solve anything. Blaming is what jerks do.

Notice I said management must be “deeply involved”: this isn’t some superficial level of involvement. Sure, charts with greens, reds, and trendlines probably need to be floating around, and regular reviews are important. I suppose you could say that these are pointy-haired manager things. (Believe me, however, they do help.) A manager must go deeper than this, proactively and regularly reviewing the state of performance across the product, alongside the other basic quality metrics and progress on features. It’s a core tenet of the way the team does its work, and it must be treated as such. A manager must wonder about the competitive landscape and ask the team good, insightful questions that get them thinking.

Performance doesn’t come for free. It costs the team by forcing them to slow down at times, to spend energy on things other than cranking out features, and hence requires some amount of intelligent tradeoff. How much really depends on the scenario. Managers need to coach the team to spend the right ratio of time. Those who assume performance will come for free usually end up spending 2-5x the time it would have otherwise taken, just at an inopportune moment later on (e.g., during the endgame of shipping the product, or in production when trying to scale up from 1,000 customers to 100,000).

A mentor of mine used to say “You get from your teams what you reward.” It’s especially true with performance and the engineering systems surrounding them. Consider two managers:

  • Manager A gives lip service to performance culture. She, however, packs every sprint schedule with a steady stream of features – “we’ve got to crush competitor Z and must reach feature parity!” – with no time for breaks in-between. She spends all-hands team meetings praising new features, demos aplenty, and even gives out a reward to an engineer at each one for “the most groundbreaking feature.” As a result, her team cranks out features at an impressive clip, delivers fresh demos to the board every single time, and gives the sales team plenty of ammo to pursue new leads. There aren’t performance gates and engineers generally don’t bother to think much about it.

  • Manager B takes a more balanced approach. She believes that, given the competitive landscape and the need to impress customers and board members with whizbang demos, new features need to keep coming. But she is also wary of building up too much debt in areas like performance, reliability, and quality for areas she expects to stick. So she intentionally puts her foot on the brake and pushes the team just as hard on these areas as she does on features. She demands good engineering systems and live flighting of new features with performance telemetry built in, for example. This requires that she hold board members and product managers at bay, which is definitely unpopular and difficult. In addition to the “most groundbreaking feature” award at each all-hands, she shows charts of performance progress and delivers a “performance ninja” award too, to the engineer who delivered the most impactful performance improvement. Note that engineering systems improvements also qualify!

Which manager do you think is going to ship a quality product, on time, that customers are in love with? My money is on Manager B. Sometimes you’ve got to slow down to speed up.

Microsoft has been undergoing two interesting transitions recently that are related to this point: on one hand, the elimination of “test” as a discipline mentioned earlier; and, on the other hand, a renewed focus on engineering systems. It’s been a bumpy ride. Surprisingly, one of the biggest hurdles to get over wasn’t with the individual engineers at all – it was the managers! “Development managers” in the old model got used to focusing on features, features, features, and left most of the engineering systems work to contractors, and most of the quality work to testers. As a result, they were ill-prepared to recognize and reward the kind of work that is essential to building a great performance culture. The result? You guessed it: a total lack of performance culture. But, more subtly, you also ended up with “leadership holes”; until recently, there were virtually no high-ranking engineers working on the mission-critical engineering systems that make the entire team more productive and capable. Who wants to make a career out of the mere grunt work assigned to contractors and underappreciated by management? Again, you get what you reward.

There’s a catch-22 with early prototyping: you don’t know if the code is going to survive at all, so the temptation is to spend zero time on performance. If you’re hacking away towards a minimum viable product (MVP), and you’re a startup burning cash, that’s understandable. But I strongly advise against this. Architecture matters, and a few poor architectural decisions made at the outset can lay the foundation for an entire skyscraper of ill-performing code atop them. It’s better to make performance part of the feasibility study and early exploration.

Finally, to tie up everything above, as a manager of large teams, I think it’s important to get together regularly – every other sprint or two – to review performance progress with the management team. This is in addition to the more fine-grained engineer, lead, and architect level pow-wows that happen continuously. There’s a bit of a “stick” aspect of such a review, but it’s more about celebrating the team’s self-driven accomplishments, and keeping it on management’s radar. Such reviews should be driven from the lab and manually generated numbers should be outlawed.

Which brings me to …

Process and Infrastructure

“Process and infrastructure” – how boring!

Good infrastructure is a must. A team lacking the above cultural traits won’t even stop to invest in infrastructure; they will simply live with what should be an infuriating lack of rigor. And good process must ensure effective use of this infrastructure. Here is the bare minimum in my book:

  • All commits must pass a set of gated performance tests beforehand.
  • Any commits that slip past this and regress performance are reverted without question. I call this the zero tolerance rule.
  • Continuous performance telemetry is reported from the lab, flighting, and live environments.
  • This implies that performance tests and infrastructure have a few important characteristics:
    • They aren’t noisy.
    • They measure the “right” thing.
    • They can be run in a “reasonable” amount of time.

I have this saying: “If it’s not automated, it’s dead to me.”

This highlights the importance of good infrastructure and avoids the dreaded “it worked fine on my computer” that everybody, I’m sure, has encountered: a test run on some random machine – under who knows what circumstances – is quoted to declare success on a benchmark… only to find out some time later that the results didn’t hold. Why is this?

There are countless possibilities. Perhaps a noisy process interfered, like antivirus, search indexing, or the application of operating system updates. Maybe the developer accidentally left music playing in the background in their media player. Maybe the BIOS wasn’t properly adjusted to disable dynamic clock scaling. Perhaps it was an innocent data-entry error made while copy-and-pasting the numbers into a spreadsheet. Or maybe the numbers for two comparison benchmarks came from two incomparable machine configurations. I’ve seen all of these happen in practice.

In any manual human activity, mistakes happen. These days, I literally refuse to look at or trust any number that didn’t come from the lab. The solution is to automate everything and focus your energy on making the automation infrastructure as damned good as possible. Pay some of your best people to make this rock solid for the rest of the team. Encourage everybody on the team to fix broken windows, and take a proactive approach to improving the infrastructure. And reward it heartily. You might have to go a little slower, but it’ll be worth it, trust me.

Test Rings

I glossed over a fair bit above when I said “all commits must pass a set of performance tests,” and then went on to talk about how a checkin might “slip past” said tests. How is this possible?

The reality is that it’s usually not possible to run all tests and find all problems before a commit goes in, at least not within a reasonable amount of time. A good performance engineering system should balance the productivity of speedy codeflow with the productivity and assurance of regression prevention.

A decent approach for this is to organize tests into so-called “rings”:

  • An inner ring containing tests that all developers on the team measure before each commit.
  • An inner ring containing tests that developers on your particular sub-team measure before each commit.
  • An inner ring containing tests that developers run at their discretion before each commit.
  • Any number of successive rings outside of this:
    • Gates for each code-flow point between branches.
    • Post-commit testing – nightly, weekly, etc. – based on time/resource constraints.
    • Pre-release verification testing.
    • Post-release telemetry and monitoring.
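One way to make such a ring structure concrete is as plain data that the CI system consults per trigger. This is purely an illustrative Python sketch; the ring names and suite names are invented for the example:

```python
# Hypothetical ring layout: each ring names its suites and when they run.
RINGS = [
    {"name": "pre-commit (whole team)",    "when": "every commit",  "suites": ["startup_time", "alloc_per_request"]},
    {"name": "pre-commit (sub-team)",      "when": "every commit",  "suites": ["request_latency"]},
    {"name": "pre-commit (discretionary)", "when": "on demand",     "suites": ["soak_4h"]},
    {"name": "nightly",                    "when": "every night",   "suites": ["soak_4h", "techempower_full"]},
    {"name": "pre-release",                "when": "every release", "suites": ["full_matrix"]},
]

def suites_for(trigger):
    """All suites that must run for a given trigger event, deduplicated."""
    return sorted({s for ring in RINGS if ring["when"] == trigger for s in ring["suites"]})
```

Keeping the layout as data makes the inevitable debates about ring membership a small, reviewable diff rather than a CI configuration archaeology dig.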

As you can see, there’s a bit of flexibility in how this gets structured in practice. I wish I could lie and say that it’s a science, however it’s an art that requires intelligently trading off many factors. This is a constant source of debate and one that the management team should be actively involved in.

A small team might settle on one standard set of benchmarks across the whole team. A larger team might need to split inner ring tests along branching lines. And no matter the size, we would expect the master/main branch to enforce the most important performance metrics for the whole team, ensuring no code ever flows in that damages a core scenario.

In some cases, we might leave running certain pre-commit tests to the developer’s discretion. (Note, this does not mean running pre-commit tests altogether is optional – only a particular set of them!) This might be the case if, for example, the test covered a lesser-used component and we know the nightly tests would catch any post-commit regression. In general, when you have a strong performance culture, it’s okay to trust judgement calls sometimes. Trust but verify.

Let’s take a few concrete examples. Performance tests often range from micro to macro in size, and correspondingly from easier to harder when pinpointing the source of a regression. (Micro measures just one thing, so fluctuations tend to be easy to understand, whereas macro measures an entire system, where fluctuations tend to take a bit of elbow grease to track down.) A web server team might include a range of micro and macro tests in the innermost pre-commit suite: number of bytes allocated per request (micro), request response time (micro), … perhaps a half dozen other micro-to-midpoint benchmarks …, and TechEmpower (macro), let’s say. Thanks to lab resources, test parallelism, and the awesomeness of GitHub webhooks, let’s say these all complete in 15 minutes, nicely integrated into your pull request and code review processes. Not too bad. But this clearly isn’t perfect coverage. Maybe every night, TechEmpower is run for 4 hours, to measure performance over a longer period of time and identify leaks. It’s possible a developer could pass the pre-commit tests and then fail this longer test, of course. Hence, the team lets developers run this test on demand, so a good doobie can avoid getting egg on his or her face. But alas, mistakes happen, and again there isn’t a culture of blame or witch-hunting; it is what it is.
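A micro metric like “bytes allocated per request” can be approximated in a Python service with the standard tracemalloc module; the request handler here is a hypothetical stand-in, not a real server:

```python
import tracemalloc

def peak_bytes_allocated(fn, *args):
    """Measure peak bytes allocated during one call to `fn` (a micro metric)."""
    tracemalloc.start()
    fn(*args)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

# Hypothetical request handler under test.
def handle_request(payload):
    return {"echo": payload.upper()}

print(f"~{peak_bytes_allocated(handle_request, 'hello')} peak bytes per request")
```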

This leads me to back to the zero tolerance rule.

Barring exceptional circumstances, regressions should be backed out immediately. In teams where I’ve seen this succeed, there were no questions asked, and no IOUs. As soon as you relax this stance, the culture begins to break down. Layers of regressions pile on top of one another and you end up ceding ground permanently, no matter the best intentions of the team. The commit should be undone, the developer should identify the root cause, remedy it, ideally write a new test if appropriate, and then go through all the hoops again to submit the checkin, this time ensuring good performance.

Measurement, Metrics, and Statistics

Decent engineers intuit. Good engineers measure. Great engineers do both.

Measure what, though?

I put metrics into two distinct categories:

  • Consumption metrics. These directly measure the resources consumed by running a test.
  • Observational metrics. These measure the outcome of running a test, observationally, using metrics “outside” of the system.

Examples of consumption metrics are hardware performance counters, such as instructions retired, data cache misses, instruction cache misses, TLB misses, and/or context switches. Software performance counters are also good candidates, like number of I/Os, memory allocated (and collected), interrupts, and/or number of syscalls. Examples of observational metrics include elapsed time and cost of running the test as billed by your cloud provider. Both are clearly important for different reasons.
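On a Unix system, one cheap way to collect both kinds side by side from Python is the standard resource module; this is a sketch, and the particular metric choices are illustrative:

```python
import resource
import time

def measure(fn):
    """One observational metric (wall time) plus a few consumption metrics
    (CPU time, peak RSS, involuntary context switches) for a run of `fn`."""
    r0 = resource.getrusage(resource.RUSAGE_SELF)
    t0 = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - t0
    r1 = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "elapsed_s": elapsed,                         # observational
        "cpu_s": r1.ru_utime - r0.ru_utime,           # consumption
        "peak_rss": r1.ru_maxrss,                     # consumption (units vary by OS)
        "ctx_switches": r1.ru_nivcsw - r0.ru_nivcsw,  # consumption
    }
```

Hardware counters like instructions retired need platform-specific tooling (perf, ETW, etc.), but even this much gives a regression report something to explain itself with.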

Seeing a team measure time and time alone literally brings me to tears. Time is a good measure of what an end user will see – and therefore makes a good high-level test – however it is seriously lacking in the insights it can give you. And if there’s no visibility into variance, it can be borderline useless.

Consumption metrics are obviously much more helpful to an engineer trying to understand why something changed. In our above web server team example, imagine request response time regressed by 30%. All the test report tells us is the time. It’s true, a developer can then try to reproduce the scenario locally and manually narrow down the cause, however this can be tedious, takes time, and is likely imperfect due to differences between lab and local hardware. What if, instead, both instructions retired and memory allocated were reported alongside the regression in time? From this, it could be easy to see that suddenly 256KB of memory was being allocated per request that wasn’t there before. Being aware of recent commits, an engineer could quickly pinpoint and back out the culprit before additional commits pile on top, further obscuring the problem. It’s like printf debugging.

Speaking of printf debugging, telemetry is essential for long-running tests. Even low-tech approaches, like printfing the current set of metrics every so often (e.g., every 15 seconds), can help track down where something went into the weeds simply by inspecting a database or logfile. Imagine trying to figure out where the 4-hour web server test went off the rails at around the 3 1/2 hour mark; this can be utterly maddening without continuous telemetry! Of course, it’s also a good idea to go beyond this. The product should have a built-in way to collect this telemetry out in the wild, and correlate it back to key metrics. StatsD is a fantastic option.
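The low-tech periodic-printf telemetry described above fits in a few lines. A background-thread sketch in Python, where the interval and metric names are arbitrary and `emit` could just as well write to a logfile or push to StatsD:

```python
import threading
import time

def start_telemetry(get_metrics, interval_s=15.0, emit=print):
    """Periodically emit the current metric snapshot (low-tech telemetry).
    `get_metrics` returns a dict of name -> value."""
    stop = threading.Event()

    def loop():
        # wait() doubles as the sleep and the shutdown check.
        while not stop.wait(interval_s):
            snapshot = get_metrics()
            emit(f"[{time.strftime('%H:%M:%S')}] " +
                 " ".join(f"{k}={v}" for k, v in snapshot.items()))

    threading.Thread(target=loop, daemon=True).start()
    return stop  # call stop.set() to halt telemetry
```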

Finally, it’s important to measure these metrics as scientifically as possible. That includes tracking standard deviation, coefficient of variation (CV), and geomean, and using these to ensure tests don’t vary wildly from one run to the next. (Hint: commits that tank CV should be blocked, just as those that tank the core metric itself.) Having a statistics wonk on your team is also a good idea!
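Python’s standard statistics module covers everything mentioned here. A sketch of a per-benchmark summary plus the hinted CV gate (the 10% threshold is an assumption for illustration, not a universal rule):

```python
import statistics

def summarize(samples):
    """Summary stats for a set of benchmark runs."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return {
        "mean": mean,
        "stdev": sd,
        "cv": sd / mean,  # run-to-run noise relative to the mean
        "geomean": statistics.geometric_mean(samples),
    }

def cv_ok(samples, max_cv=0.10):
    """Block commits whose runs are too noisy to trust."""
    return summarize(samples)["cv"] <= max_cv
```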

Goals and Baselines

Little of the above matters if you lack goals and baselines. For each benchmark/metric pair, I recommend recognizing four distinct concepts in your infrastructure and processes:

  • Current: the current performance (which can span multiple metrics).
  • Baseline: the threshold the product must stay above/below, otherwise tests fail.
  • Sprint Goal: where the team must get to before the current sprint ends.
  • Ship Goal: where the team must get to in order to ship a competitive feature/scenario.

Assume a metric where higher is better (like throughput); then it’s usually the case that Ship Goal >= Sprint Goal >= Current >= Baseline. As wins and losses happen, continual adjustments should be made.
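Those four concepts can be sketched as a record per benchmark/metric pair; this toy Python version assumes a higher-is-better metric, and the names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class MetricTrack:
    """The four numbers tracked per benchmark/metric pair (higher is better)."""
    current: float
    baseline: float
    sprint_goal: float
    ship_goal: float

    def passes(self):
        """Tests fail if current falls below the baseline."""
        return self.current >= self.baseline

    def healthy_ordering(self):
        """Ship Goal >= Sprint Goal >= Current >= Baseline."""
        return self.ship_goal >= self.sprint_goal >= self.current >= self.baseline
```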

For example, a “baseline ratcheting” process is necessary to lock in improvements. A reasonable approach is to ratchet the baseline automatically to within some percentage of the current performance, ideally based on standard deviation and/or CV. Another approach is to require that developers do it manually, so that all ratcheting is intentional and accounted for. And interestingly, you may find it helpful to ratchet in the other direction too. That is, block commits that improve performance dramatically and yet do not ratchet the baseline. This forces engineers to stop and think about whether performance changes were intentional – even the good ones! A.k.a., “confirm your kill.”
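Both ratcheting directions fit in a few lines. In this sketch, again for a higher-is-better metric, the two-sigma slack and five-sigma “confirm your kill” threshold are assumptions for illustration:

```python
def ratchet(baseline, current, stdev, slack_sigmas=2.0):
    """Pull the baseline up toward current performance, leaving noise-sized
    slack so ordinary run-to-run variance still passes the gate."""
    candidate = current - slack_sigmas * stdev
    return max(baseline, candidate)  # a baseline never moves backwards

def needs_confirmation(baseline, current, stdev, sigmas=5.0):
    """Flag suspiciously large improvements so the author confirms the kill."""
    return current > baseline + sigmas * stdev
```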

It’s of course common that sprint goals remain stable from one sprint to the next. All numbers can’t be improving all the time. But this system also helps to ensure that the team doesn’t backslide on prior achievements.

I’ve found it useful to organize sprint goals behind themes. Make this sprint about “server performance.” Or “shake out excessive allocations.” Or something else that gives the team a sense of cohesion, shared purpose, and adds a little fun into the mix. As managers, we often forget how important fun is. It turns out performance can be the greatest fun of all; it’s hugely measurable – which engineers love – and, speaking for myself, it’s a hell of a time to pull out the scythe and start whacking away! It can even be a time to learn as a team, and to even try out some fun, new algorithmic techniques, like bloom filters.
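For instance, the bloom filter just mentioned is small enough to try as a team exercise; here’s a minimal sketch (the bit-array size and hash count are arbitrary choices for the example):

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: fast set membership with occasional false
    positives but no false negatives."""

    def __init__(self, size_bits=1024, hashes=3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        # Derive k independent positions by salting a cryptographic hash.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))
```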

Not every performance test needs this level of rigor. Any that are important enough to automatically run pre-commit most certainly demand it. And probably those that are run daily or monthly. But managing all these goals and baselines and whatnot can get really cumbersome when there are too many of them. This is a real risk especially if you’re tracking multiple metrics for each of your benchmarks.

This is where the idea of “key performance indicators” (KPIs) becomes very important. These are the performance metrics important enough to track at a management level, showing the whole team how healthy the overall product is at any given time. In my past team, which built an operating system and its components, this included things like process startup time, web server throughput, browser performance on standard industry benchmarks, and the number of frames dropped in our realtime audio/video client, with multiple metrics apiece plus the abovementioned statistical measures. These were of course in the regularly running pre- and post-commit test suites, but rolling them up in one place, and tracking against the goals, was a hugely focusing exercise.

In Summary

This post just scratches the surface of how to do good performance engineering, but I hope you walk away with at least one thing: doing performance well is all about having a great performance culture.

This culture needs to be infused throughout the entire organization, from management down to engineers, and everybody in between. It needs to be transparent, respectful, aggressive, data-driven, self-critical, and relentlessly ambitious. Great infrastructure and supporting processes are a must, and management needs to appreciate and reward these, just as they would feature work (and frequently even more). Only then will the self-reinforcing flywheel get going.

Setting goals, communicating regularly, and obsessively tracking goals and customer-facing metrics is paramount.

It’s not easy to do everything I’ve written in this article. It’s honestly very difficult to remember to slow down and be disciplined in these areas, and it’s easy to trick yourself into thinking that running as fast as possible and worrying about performance later is the right call. Well, I’m sorry to tell you, sometimes it is. You’ve got to use your intuition and your gut; in my experience, however, we tend to undervalue performance considerably compared to features.

If you’re a manager, your team will thank you for instilling a culture like this, and you’ll be rewarded by shipping better-performing software on schedule. If you’re an engineer, I guarantee you’ll spend far less time scrambling, more time being proactive, and more time having fun, on a team obsessed with performance for its customers. I’d love to hear what you think and about your own experiences establishing a performance culture.

Does Money Ruin Everything?

Imagine someone said:

The problem with paying people to make shoes is that then they get all focused on the money instead of the shoes. People who make shoes just because they honestly love making shoes, and who aren’t paid anything at all, make better shoes. Once money gets involved people lie about how good their shoes are, and about which shoes they like how much. But without money involved, everyone is nice and honest and efficient. That’s the problem with capitalism; money ruins everything.

Pretty sad argument, right? Now read Tyler Cowen on betting:

This episode is a good example of what is wrong with betting on ideas. Betting tends to lock people into positions, gets them rooting for one outcome over another, it makes the denouement of the bet about the relative status of the people in question, and it produces a celebratory mindset in the victor. That lowers the quality of dialogue and also introspection, just as political campaigns lower the quality of various ideas — too much emphasis on the candidates and the competition. Bryan, in his post, reaffirms his core intuition that labor markets usually return to normal pretty quickly, at least in the United States. But if you scrutinize the above diagram, as well as the lackluster wage data, that is exactly the premise he should be questioning. (more)

Sure, relative to ideal people who only discuss and think about topics with a full focus on and respect for the truth and their disputants, what could be the advantage of bets? Money will only distract them from studying truth, right?

But just because people don’t bet doesn’t mean they don’t have plenty of other non-truth-oriented incentives and interests. They are often rooting for positions, and celebrating some truths over others, due to these other interests. Bet incentives are at least roughly oriented toward speaking truth; the other incentives, not so much. Don’t let the fictional best be the enemy of the feasible-now good. For real people with all their warts, bets promote truth. But for saints, yeah, maybe not so much.
