How we identify these valuable demographic segments.

How OUR AUDIENCE INCLUDES works

OUR AUDIENCE INCLUDES is inspired by Twitter's taxonomy of humans. It is hosted on Glitch, which means it is written in JavaScript on Node.js.

Requests for the front page trigger the random generation of 30 or so demographic phrases. For each phrase, the algorithm first randomly chooses one of several phrase templates, then constructs the phrase according to the rules of that template.
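As a concrete illustration, here's a minimal sketch of that per-phrase flow. The template strings and the randomTemplate name are hypothetical stand-ins modeled on the site's output, not the actual template set:

```javascript
// Hypothetical templates modeled on the site's output; the real set
// is larger and more varied.
const templates = [
  '[People]',
  '[People] who [verbPhrase]',
  '[People] whose [indicator] suggests they may be [people]'
];

// On a front-page request this runs ~30 times: pick a template at
// random, then fill in its bracketed slots (filling sketched below).
function randomTemplate() {
  return templates[Math.floor(Math.random() * templates.length)];
}

console.log(randomTemplate());
```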

Every bracketed word in a template represents a function within the algorithm that yields a semi-randomly generated word or phrase grammatically compatible with that slot. [People] could become "humans", "aging hipsters", or "socialist vampires", while [indicator] might be "household income", "mien", or "average volume in cubic centimeters".
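A minimal sketch of that bracket-to-function mapping, with made-up word lists; the fillers and pick names are inventions for this example:

```javascript
const pick = list => list[Math.floor(Math.random() * list.length)];

// Hypothetical mapping from bracketed slot names to generator
// functions; the real functions are more elaborate than a single
// list lookup.
const fillers = {
  People: () => pick(['humans', 'aging hipsters', 'socialist vampires']),
  indicator: () => pick(['household income', 'mien', 'average volume in cubic centimeters'])
};

// Replace each [bracketed] token with the output of its generator.
function fillTemplate(template) {
  return template.replace(/\[(\w+)\]/g, (_, slot) => fillers[slot]());
}

console.log(fillTemplate('[People] whose [indicator] is in the top percentile'));
```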

Most of the templates share structural elements, so for coherence I tried to write template-filling functions such that they would be usable in multiple templates. I tuned the lower-level functions to return phrases of varying complexity: the function responsible for generating, e.g., [Items] can conceivably return both "cars" and "ACME brand whiskey in bulk". Given that the more elaborate templates involve multiple dice rolls (so to speak), this ensures a healthy variation of amusingly short phrases ("millennials", "aging refuseniks") next to magnificently absurd bullshit ("immortal cats whose anima suggests they may be users of Ezaki Glico brand military surplus bric-a-brac").
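The varying-complexity idea might look something like the sketch below; the word lists, probabilities, and the items/maybe names are invented for illustration:

```javascript
const pick = list => list[Math.floor(Math.random() * list.length)];
const maybe = p => Math.random() < p;

// Each call rolls dice on whether to elaborate, so outputs range from
// plain ("cars") to baroque ("ACME brand whiskey in bulk").
function items() {
  let phrase = pick(['cars', 'whiskey', 'bric-a-brac']);
  if (maybe(0.4)) phrase = pick(['ACME', 'Ezaki Glico']) + ' brand ' + phrase;
  if (maybe(0.3)) phrase += ' ' + pick(['in bulk', 'of questionable provenance']);
  return phrase;
}

console.log(items());
```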

To avoid too many Mad Libs-style non sequiturs, I had planned to generate the word lists entirely off the top of my head, but I ultimately ended up salting my lists of verbs and adjectives with a proportion of random entries from WordNet, refreshed on every request. This does result in occasional non sequiturs or ungrammatical entries, but the results are spontaneously amusing at least as often as they are obscure.
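One way that salting could work is sketched below. Reading lemmas straight out of a local WordNet index file is an assumption on my part, as is every name and path in the sketch; the live site may well use an npm wrapper instead:

```javascript
const fs = require('fs');

// Sample random lemmas from a local WordNet index file (e.g.
// index.verb from the WordNet 3.x distribution); path and helper
// name are assumptions for this sketch.
function randomWordnetEntries(indexPath, count) {
  const lemmas = fs.readFileSync(indexPath, 'utf8')
    .split('\n')
    .filter(line => line && !line.startsWith(' ')) // skip the license header
    .map(line => line.split(' ')[0].replace(/_/g, ' '));
  return Array.from({ length: count },
    () => lemmas[Math.floor(Math.random() * lemmas.length)]);
}

// "Salt" a handwritten list with random WordNet verbs, re-rolled on
// every request.
const handwrittenVerbs = ['lurk', 'flourish', 'scheme'];
const verbsForThisRequest = () =>
  handwrittenVerbs.concat(randomWordnetEntries('./index.verb', 5));
```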

The fundamental humor of an individual entry is meant to be alternately gentle or fanciful, so in assembling lists of types of people, I tried to avoid anything inherently mean-spirited or insulting. If an entry ended up funny at the expense of the indicated group, I wanted the humor to come from what the described people were doing, or from the interplay between the identity and the action; failing those, I wanted it to be absurd or eerie rather than mean.

The inclusion of anything obviously motivated or self-contained as a notional "joke" (say, "Trump supporters") felt somehow lazy, and counter to the overall tone of humor I was aiming for. Also, the more Discourse-y a list entry, the less likely it felt to age well (although why I'm concerned about disposable, machine-generated jokes aging well is a fair question, to which I have no answer).

There are admittedly a few politically tinted terms (e.g. "communists") that sort of break this rule, and my excuse is that I thought they were funny enough to justify it; in any case, they're all terms just as likely to be applied positively to oneself as pejoratively to others.

There are a few Stupid Nerd Shit references (e.g. "Self-Sealing Stem Bolts," "Weyland-Yutani") but mostly I tried to avoid obvious inclusion of anything that might lead to boring just-two-things-isms.

Interesting or funny intransitive verbs and verb phrases were by far the hardest category of word to think of; I'm sure there's some cognitive or linguistic reason why this might be. Free-associating the word lists was the most cognitively interesting part of the whole task, and looking at them now is a fun exercise in seeing how some groups of terms obviously came from the same train of thought, while the provenance of others is obscure. Brains are strange.

I wanted some way to keep track of the most apt or funny phrases, so clicking or tapping on any demographic on the front page stores its text in a simple NeDB database. NeDB is extremely handy! The /affinities URL just dumps everything that's ever been clicked on, while another URL provides the text of a single demographic from the database for Twitter bot purposes. (Here the author is seen hastily adding "Joke API Design" to his résumé, right under "Full-Stack Editorial Engineer".)
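Roughly, the storage and retrieval endpoints could look like the sketch below. The /save and /one route names and the payload shape are guesses; only /affinities is named in the post:

```javascript
const express = require('express');
const Datastore = require('nedb');

const app = express();
const db = new Datastore({ filename: 'affinities.db', autoload: true });

// Clicking/tapping a demographic on the front page POSTs its text here.
app.post('/save', express.json(), (req, res) => {
  db.insert({ text: req.body.text, when: Date.now() },
    () => res.sendStatus(204));
});

// Dump everything that's ever been clicked on.
app.get('/affinities', (req, res) => {
  db.find({}, (err, docs) => res.json(docs));
});

// Hand one saved demographic to the Twitter bot.
app.get('/one', (req, res) => {
  db.find({}, (err, docs) => {
    const doc = docs[Math.floor(Math.random() * docs.length)];
    res.send(doc ? doc.text : '');
  });
});

app.listen(3000);
```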