Friday, February 26, 2010

When social search gets personal: ChatRoulette, PeerPong, Aardvark


A couple of items in the news this week got me thinking about the social search space. But not from the usual angle. We have all heard about ChatRoulette by now, and of the random acts of human exhibitionism that take place there. Well, apparently some of those random encounters were too good to let go of. And so some visitors have taken to a new Missed Connections to find people they met on ChatRoulette.

Cue "I still haven't found what I'm looking for." Yeah, by U2. And maybe that should be "I still haven't found who I'm looking for."

This is a great example of unintended social outcomes, and of how, in openly designed social systems, users will find ways of addressing what the application doesn't handle. Since ChatRoulette is anonymous by design, we can already anticipate that one of its social facets will be identity. Anonymity and privacy get users in, but on some occasions they will want to find each other again. Anonymity is coupled to identity (who), just as random is coupled to specific (what).

Missed connections may be where users have to go now to try to re-locate people they met on ChatRoulette. Or ChatRoulette could accommodate this need in the future. It would then in effect be providing more than just random encounters — and would be providing a kind of social search.

Another item in the news this week related to social search was PeerPong, which received funding. (Disclosure: I consulted to PeerPong early on.) Described now as an Aardvark for Twitter, PeerPong matches user questions to Twitter users who may be able to answer them. Where Aardvark uses one's social network to distribute questions and solicit answers, PeerPong uses Twitter. (As you probably know, Aardvark was just acquired by Google.)
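PeerPong hasn't published how its matching works, but to make the mechanics concrete, here is a minimal sketch of how such a service might route a question, assuming hypothetical per-user topic profiles (say, mined from tweets) and a naive keyword-overlap score. Every name and scoring choice below is my own illustration, not PeerPong's implementation:

```python
import re

def tokenize(text):
    """Lowercase a string and extract its word tokens as a set."""
    return set(re.findall(r"[a-z]+", text.lower()))

def match_answerers(question, profiles, top_n=3):
    """Rank candidate answerers by keyword overlap between the question
    and each user's (hypothetical) topic profile."""
    q_tokens = tokenize(question)
    scored = []
    for user, topics in profiles.items():
        overlap = q_tokens & tokenize(" ".join(topics))
        if overlap:
            scored.append((len(overlap), user))
    scored.sort(reverse=True)
    return [user for _, user in scored[:top_n]]

# Invented profiles for illustration only.
profiles = {
    "@espresso_nerd": ["coffee", "espresso", "roasting"],
    "@bike_mechanic": ["bicycle", "repair", "wheels"],
}
print(match_answerers("Where can I learn espresso roasting?", profiles))
# -> ['@espresso_nerd']
```

The hard part, of course, isn't the matching; it's everything this sketch leaves out, which is the subject of the rest of this post.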

The social search issue here is obviously different from the one playing out around ChatRoulette's missed connections. But the two have one thing in common worth mentioning: what happens when social search gets personal?

Social search tends to suggest traditional search supplemented with results qualified by social relevance: using, say, social algorithms and user input (ratings, votes, etc.) to deliver complementary results. Social search, in other words, as regular search plus long-tail social data mining.

But there's another kind of social search. This kind, of which Aardvark, PeerPong, and missed connections are all examples, uses people to solve search problems. We usually call these question/answer services. And in this area, success can be more elusive. Where algorithmic social search has one user experience issue, question/answer services have two.

Both questioner and answerer must have a satisfactory experience for the service to work. In fact the service really hangs on the experience of the answerer. The questioner has an immediate and present need or interest; not so the answerer. His or her participation has to be incentivized or contextualized by other means.

That social search can get personal makes it a systemically reinforcing and, as a user experience, much more compelling (and human) means of solving "search" problems. (Question/answer services are much more than "search.") But this potential for the social to get personal is also a barrier to use: put plainly, people can get freaked out.

ChatRoulette's social search problem will be one of reciprocity and mutuality, solved only if both parties agree to re-find each other. Presumably the experience these users had on webcam was enough to take care of trust issues (which is not to say it's free of risk). For Aardvark and PeerPong, the challenge is relational.

What commitments or obligations to ongoing social search will a user have to another user in the future? Users don't know each other, even if they may be connected through Twitter, through shared topical interests, or by social/peer networks.

Context of use can address some of this. By contextualizing search experiences and answer contributions, services like these can reduce the freak factor, using social context to de-personalize perceived obligations, expectations, and commitments. Context can help reduce user fears of expected future participation commitments. And context can supply alternative incentives to use: game contexts, expertise rankings, and the like. In short, using the social to absorb some of the personal.

One wouldn't have thought ChatRoulette would have anything to do with social search. But the random selection of users is guaranteed to produce its inverse as an effect and byproduct. When people connect, algorithms become unnecessary.

Cue U2.

PeerPong Raises $2.8M for an Aardvark for Twitter
Calling All Romantics: Chatroulette Now Has Its Own Missed Connections
ChatRoulette, hall of mirrors
ChatRoulette, I'm watching you (watching me)
Google's Aardvark acquisition: Questions for Buzz?


Tuesday, February 02, 2010

Algorithmic authority: critical reflections

In a post late last year on algorithmic authority, Adina Levin compares and contrasts the relevance of social selections and recommendations made in Google and Facebook. She raises the question of the algorithm's capacity to approximate human preferences.

On Facebook's Friend recommendations and use of social algorithms to surface relevant news, a topic of discussion at the time, she writes: "Louis Gray writes that this approach caused him to miss the news that his sister, who'd been regularly posting updates, had had a new baby."

In human affairs, and friendships in particular, algorithms are of course only precariously prescient. In fact, they often "fail." I would like to take a closer look at this failure. What is it, when it is not machine or operational failure, but a failure to produce accurate social results?

When algorithmic authority fails
We need to begin with the claim of "algorithmic authority," around which discussion by Clay Shirky and others has been rich. There is some conceptual slippage here. Is the algorithm's authority in question because it fails on occasion? In which case, it lacks authority for being inconsistent and unreliable. Or is its authority in question because it cannot compete with human judgment, or "the sort of social assessment we do every day about maintaining social connections" (@alevin)? In which case its failure is an intrinsic flaw, and we should bracket the notion of algorithmic authority with the recognition that its reach and effectiveness are always only partial and speculative.

In addition to the conceptual slippage we just noted around the claim to authority, there is, I think, some further confusion introduced by the fact that algorithmically based social search, recommendations, and navigational methods involve a call to action. Namely, I wonder if Adina rightly raises the point but conflates recommendations with their call to action.

Surely, when Facebook makes a social recommendation, it assumes that users will themselves choose whether to connect, poke, or ignore those users recommended for friendship. Most likely, Facebook is using friend recommendations to surface members its users have not yet connected to. Its social algorithms make recommendations, but users close the loop by taking action.
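To make that division of labor concrete, here is a minimal sketch, my own illustration and in no way Facebook's actual algorithm, of a friends-of-friends recommender whose output remains a mere claim until the user acts on it (the graph and the mutual-friends heuristic are assumptions for the example):

```python
from collections import Counter

def recommend_friends(user, graph, top_n=3):
    """Surface members the user has not yet connected to, ranked by
    the number of mutual friends they share (an assumed heuristic)."""
    friends = graph[user]
    mutuals = Counter()
    for friend in friends:
        for candidate in graph[friend]:
            if candidate != user and candidate not in friends:
                mutuals[candidate] += 1
    return [candidate for candidate, _ in mutuals.most_common(top_n)]

# A toy social graph, invented for illustration.
graph = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice", "dave", "erin"},
    "dave":  {"bob", "carol"},
    "erin":  {"carol"},
}

# The algorithm only makes the claim; the user closes the loop.
for candidate in recommend_friends("alice", graph):
    print(f"Claim: you may know {candidate}. Connect, poke, or ignore?")
```

Note where the code stops: it prints a claim and waits. Everything that makes the recommendation authoritative, or not, happens on the user's side of that line.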

In other words, authority is not in the claim alone (the friend recommendation, which claims that the user is a potential friend of yours), but in the user's response. The user's acceptance or rejection of that claim validates the algorithm's authority.

Authority, in short, depends perhaps on the user, not on the algorithm, for it is only on the basis of the user's acceptance that authority is realized. It is subjectively interpreted, not objectively held.

For conceptual clarity: the general and the particular
I think "algorithmic authority" conflates two concepts into one idea, making it easy to confuse the argument and draw ambivalent conclusions. What is the authority of the algorithm? And in what cases do algorithms have authority? Those are two separate things. We have a problem of the general and the particular.

The algorithm generally may invoke the authority of data, information sourcing, math, and scientific technique. Those are claims on authority based in the faith we put in science (actually, math, and specifically, probabilities). That's the authority of the algorithm — not of any one algorithmic suggestion in particular, but of the algorithmic operation in general.

As to the case or context in which algorithms have authority, there are many. Adina contrasts two — the algorithmic selection of relevant news and the recommendation of friends based on one's social graph. And there are, of course, many other examples of use cases and contexts in which social algorithms surface and expose possible connections and associations. In the particular example cited for Louis Gray, and in any other particular case, it is the rightness of the algorithm's particular selection that makes a claim to authority.

So we have the algorithm as general method, and context as particular instance. In the former, the algorithm as general method authorizes the claim to authority. In the latter, it is the rightness of the particular case or result that justifies the claim to authority.

Two kinds of normative claim: large and small
Either claim to authority may be recognized and accepted by a user. Either claim to authority may be invested with trust and confidence. And either may likewise fail, but on its own terms: as a failure of algorithmic operations to surface social associations and relations; or as a failure of algorithmic selections to refer to an individual user's interests, tastes, and preferences. The authority of method in general may fail to capture relevant associations belonging to the social field in general. The authority of selection in particular may fail to articulate relevant social facts to the particular individual.

These two kinds of authority, or rather, two claims to authority (for that's really what they are — claims valid only if accepted) correspond to small and large normative claims (Habermas). Normative claims are linguistic statements that use "you should." The call to action in a friend recommendation is a normative "should." Small normative claims are wagered by the individual (to personalize Facebook friend recommendations, something like: "Friend Jimmy, I think you guys would get along"). Large normative claims are referred to institutional authority (to institutionalize Facebook friend recommendations, something like: "We can see more of your latent social connections than you can, and on that basis we recommend you friend Jimmy").

Clay Shirky's "social judgments"
Clay Shirky speculates on algorithmic authority in a post, and in a manner, that exemplifies the point.

Shirky writes about the authority invested in him: "Some of these social judgments might be informal — do other people seem to trust me? — while others might be formal — do I have certification from an institution that will vouch for my knowledge of Eastern Europe."

There are in fact two kinds of trust involved here, each of which may be related to authority. First is trust in the person known. Second is trust in social position or role. This is the distinction between trusting the Clay you know and trusting the professor named Clay.

We tend not to distinguish person and position, but there is again a difference between trust invested in the particular and trust extended to the general. Shirky calls these "social judgments" but in fact the former, being personal, is less a social judgment than a personal assessment. We trust friends not by their reputation but by our personal experience.

(Sidenote: In all matters social media the personal and the social are easily conflated or used interchangeably. My aim here is to cleave the difference in the interest of clarity.)

Shirky goes on to say that "As the philosopher John Searle describes social facts, they rely on the formulation X counts as Y in C." But I think Clay employs a bit of fuzzy logic going from social judgments to social facts. The failed and misguided friend recommendations Adina so rightly notes, and the suggestions made by news sourcing algorithms, are not just "social facts" but are claims to truth.

Searle would himself likely agree that a claim that rests on authority is in fact not a fact but a linguistic statement. It makes a claim whose validity depends on normative rights assumed by the authority in general, and whose rightness depends on the claim's validity in particular.

Claims made on the basis of authority depend on the audience for their validity as a "social fact." In other words, they are "true facts" only insofar as the audience accepts them to be so. They are not "real" but are simply "valid." (This argument uses Habermas' three truth claims and is an alternative to Searle's concept of truth. If one is going to use terms like judgment and fact, it behooves one to get picky.)

Authority and the call to action
Now, I have suggested that it might not be the recommendation itself, but the implied call to action, that is the problem Adina identifies. Not, in other words, that Facebook recommends the wrong person, but that it recommends they become friends, and that a poke start off the friendship.

As we have seen, the social recommendation is actually a linguistic claim. Its validity is up to the user's interpretation and response. In its form as a navigational or interface element, it issues a call to action, yes. But its construction is that of a linguistic claim.

I had suggested that Facebook surely doesn't expect users to take action on its recommendations unless they want to. So even if the algorithm's selection is a misfire, our response is a separate matter. In fact, what's at issue here is the authority of a claim made to recommend social interaction.

The algorithm's suggestion not only solicits the user to make friends, but implicitly includes all the social sharing that follows from that choice of inclusion. That is a matter of social action — not just of the selection of information.

But the question of whether the algorithm can play a role in social interaction and social actions is an entirely different matter. For the time being, we simply want to crack the conundrum of algorithmic authority.

A brief note is in order, for I am not just trying to overcomplicate matters. There is a reason for this complexity. As is often the case in social interaction design, there are two orders, two levels of analysis, or two registers in play. First is the meaning we might impute to objective and factual information: the "authority" we impute to information and data that lay some claim to personal and social relevance. Second is the meaning constructed socially, in the social world of subjective meanings. In systems involving users and social action and interaction, there is the pseudo-objective meaning of the information, and the separate world of valid social claims involved in action and communication among people related to each other with different degrees of interpersonal and social commitment.

Information v social action: different kinds of claims
The meanings of statements of fact, those involving what we think of as information, are not in the same linguistic register as the meanings of the claims and expressions of individuals. They are different kinds of utterance, produced (uttered) in the former case by machines (e.g. algorithmic suggestions), and in the latter case by individual users.

Claims made by people relate us to those people, and our responses are a form of social action in which we anticipate and account for the other's response. Actions involved with information can be one-off actions — but those involving other people form the thread we call relationships, no matter how thick or thin.

In the world of web interactions and social media, the calls to action that belong to the system's very social DNA and design will take the form of both human and system messages. And I suspect that what may bother us about the call to action, especially when it implicates social relations (e.g. Facebook's algorithmically selected friend recommendations), lies in the action domain of meaning, not in the factual domain of information selection.

Confusion can arise because both system messages such as Facebook friend recommendations and user-generated content take linguistic form, and as such make the types of claims and solicit calls to action and interaction that are possible with language.

The authority of the social
And yet we recognize that systems are socializing themselves in automated and algorithmic ways. With this trend in social system evolution, many new interactions and activities produce a new kind of social organization, the results of which are disruptive to a wide range of media industries.

Shirky has been a keen observer of this cultural drift. His analysis of algorithmic authority is in keeping with his view that systems absorb and reflect their own use by user populations, producing hybrid social effects in some critical areas of cultural and social organization: trust, influence, popularity, reputation, and so on. For Shirky, this has produced something new: the authority of the social.

"As more people come to realize that not only do they look to unsupervised processes for answers to certain questions, but that their friends do as well, those groups will come to treat those resources as authoritative. Which means that, for those groups, they will be authoritative, since there's no root authority to construct from."

Socio-technical transformations: judgment and authority
We can drill down even deeper, for there are two transformations at work here: the transformation of human judgment to algorithmic sourcing and filtering; and the transformation of the authority of position to the social as validation of authoritativeness. From the use of human judgment to increased reliance on algorithms, and from the authority of traditional social positions and institutions to the authority of open and networked populations.

In moving from the personal and human recommendation to the algorithmic selection, we invest our trust in a system able to source vast amounts of information. Trust is invested in its systemic reliability. This replaces the trust previously invested in an individual's experience and personal judgment.

We trade system complexity and system techniques for the reasons and thinking of a person. The algorithm now replaces human judgment on the basis of its much greater ability to weigh and value multiple sources of news and information. And it does so by necessity, for we could not ourselves evaluate social information as voluminous as that captured in the social networking world.
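As a toy illustration of what "weighing and valuing multiple sources" can amount to in practice (the signals and weights below are invented for the example; no real service's ranking formula is implied):

```python
# Invented signals and weights; each signal is assumed normalized to 0..1.
WEIGHTS = {"recency": 0.5, "affinity": 0.3, "engagement": 0.2}

def score(item):
    """Collapse several signals into a single relevance score."""
    return sum(WEIGHTS[signal] * item[signal] for signal in WEIGHTS)

items = [
    {"title": "Sister's baby news", "recency": 0.9, "affinity": 0.9, "engagement": 0.2},
    {"title": "Viral meme",         "recency": 0.8, "affinity": 0.1, "engagement": 1.0},
]

# Rank items by descending score, as a feed might.
for item in sorted(items, key=score, reverse=True):
    print(f"{score(item):.2f}  {item['title']}")
```

Shift the weights toward engagement and the baby news gets buried: the Louis Gray failure in miniature. The judgment lives entirely in the weights, and somebody, not something, chose them.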

But trust in systemic reliability differs fundamentally from the criteria we canvassed above, by which personal or institutional (small and large) authoritative claims are wagered and validated (accepted or rejected). Here, again, separate concepts combine to form "social judgment" and "social authority." And as with algorithmic authority, one involves a transformation of the general form of authority, and the other, the particular form of judgment.

The general form of authority is assimilated to the concept of a social whose sheer mass, volume, and speed of information sourcing justifies its making a claim to authority. And the particular form of judgment, which is personal and individual, is transferred to the social, which is a hive-like collective mass subjectivity.

There's no need here to break down the conceptual moves used in forming the concepts of social judgment and social authority. For they follow the operational moves used in forming algorithmic authority. What is compelling, and interesting, is the manner in which these new concepts borrow from their predecessors, dragging along with them the validity and authority established long ago, while accruing meanings and associations.

Conclusion
I do not question whether this is just the conceptual handiwork and wordsmithing on which industry experts and analysts rely as food for thought and bread on the table. Clearly these concepts resonate, and adequately describe some of the transformations (technical, social, and cultural) in which social media participate. I do question the risk of taking these concepts literally, their implications uncritically, or their assumptions without reflection. For it is then all too easy to fashion a house of cards on subsistence logic and its subsiding logical fault-lines, the consequence of which is sometimes to misread and misapprehend how the social works, to overlook what users do and choose, and to falsely attribute social results and practices to the technical infrastructure on which they depend.
