April 24, 2015 @ 1:06 pm

Book Review: On Intelligence

While I was at Cold Spring Harbor Labs a few weeks ago, I picked up a copy of On Intelligence, by Jeff Hawkins, formerly of Palm Computing (remember the Palm Pilot?), and the science writer Sandra Blakeslee.  It's a ten-year-old book that hasn't made the splash it deserves. At the time I seem to remember Hawkins being dismissed as a Silicon Valley pretender, a dilettante, and I never got around to reading it. Fortunately, he's on the CSHL board, and they keep copies of it in their bookstore.

The most basic idea of the book is actually quite similar to Darwin's insight on how to organize biological diversity. Darwin realized that each species was not a separate thing. Every species on the planet is related. Once we take time into account, we can see this organization, the Tree of Life into which each individual species can be plugged. That reorganization of biological diversity has been incredibly valuable, because it allows us to generalize responsibly from one species to another based on how far apart they are in the tree. Insects like fruit flies are a great model system for genetic disease because they share the mechanisms of sexual reproduction with us. Asexual bacteria are good models of more basic processes like energy metabolism, but less good for other things, like cancer.

Hawkins had a similar insight. The 52 Brodmann areas of the human cerebral cortex are not separate computational modules, separately evolved to perform specific computations, the way generations of neurologists and cognitive scientists have imagined. Those cortical areas are all related. They all do the same thing; they just do it on different inputs. Visual cortex performs the same operations on the signals from the optic nerves that auditory cortex performs on the signals from the ears. That's a powerful idea, for the same reason that Darwin's idea was powerful. If it's true (and I suspect that it is), we'll be able to leverage discoveries from one field of neuroscience into others, and to learn which ones will generalize and which will not. That's big.

If we think about the biological evolution of the brain as an organ, this makes perfect sense. Reptiles don't have a 6-layered cortex, and neither do birds. They have a 3-layered hippocampus, which is similar to cortex and probably its evolutionary ancestor. As far as we know, the 6-layered cortex arose one time in the history of mammals, not dozens of times. New cortical areas are built out of more or less the same stem cells as old areas. Simplicity suggests that they should do the same thing.

Hawkins goes further. He says that our larger model of the cortex as a computer is wrong. The cortex does not compute equations, as most model-builders assume; it's not a processor. The cortex is a memory system. It computes by analogy to its stored memories. The goal of the cortex is to predict its next input, much as the goal of the much older cerebellum is to reduce the error signal in movement.

Try touching your nose. The error is the difference between where your finger currently is and where you want your finger to be. If your cerebellum is healthy (and you haven't been drinking alcohol), that error signal will be very small, because at the end of the movement and at every point along that trajectory your finger will be exactly where you expect it to be. If you put on a pair of prism glasses, as I do with my Governor's School students every summer, and shift the visual input by 10 degrees to the right, you've created a big error signal, which the cerebellum will eliminate over the course of a few minutes. Your initial throw of a football will be 10 degrees off. Each throw becomes slightly more accurate until things feel normal. When you take the glasses off, the error signal returns, but in the opposite direction, so that the cerebellum has to zero out that signal, too.
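To make that error-zeroing loop concrete, here's a minimal sketch (my own toy numbers, nothing from Hawkins): each throw cancels a fraction of the remaining error, so a 10-degree prism offset shrinks toward zero over repeated trials, and taking the glasses off produces the same adaptation running in the opposite direction.

    # Illustrative only: a toy model of prism adaptation, where each trial
    # corrects a fraction of the remaining error until it is zeroed out.
    def adapt(offset_deg=10.0, learning_rate=0.3, throws=15):
        """Return the error (in degrees) on each successive throw."""
        internal_correction = 0.0   # how much the motor command has been adjusted
        errors = []
        for _ in range(throws):
            error = offset_deg - internal_correction   # where the throw actually lands
            errors.append(round(error, 2))
            internal_correction += learning_rate * error  # nudge the correction toward zero error
        return errors

    print(adapt())                    # 10.0, 7.0, 4.9, ... shrinking toward zero
    print(adapt(offset_deg=-10.0))    # glasses off: the same adaptation, mirrored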

Hawkins believes that the cerebral cortex does the same thing, not in 3 spatial dimensions but in the high-dimensional spaces of similarity. As we move up the cortical hierarchy those spaces become more and more abstract, more and more compressed. For instance, the 3 spatial dimensions might be compressed into a single abstract movement profile, so that I can recognize the way my wife moves across a crowded room.

OK, interesting – but what does this mean for teaching? It means that, contrary to our fervent hopes, we can't simply download that abstract representation into the brains of our students. It has to be built up through repetitive experience. We may be able to catalyze that process by offering good examples and guiding the generalization, but we can't simply present a rule or a concept or a generalization and drop the mic the way we imagine that we can. That's a hard truth to accept. (I'm right now listening to a student complain about “trick questions” at the table behind me at Starbucks.)

What's worse, students are really adept at repeating those rules back to us, so it can look as though they've got it when they really don't. In fact, even they believe that they've got it, because they can repeat the rule back. It feels as though they have zeroed the error signal, because we're all measuring the wrong signal, the easy convenient signal of verbal match rather than the important, buried signal of analogy match. Measuring analogy match requires presenting novel examples and measuring a much more complex error signal. The Venn diagram of new information overlapping with old experience becomes very complex, like Luisa Hiller's diagrams of the pangenome of related bacterial species.

The dirty secret? Many teachers don't understand the concepts themselves, so they aren't equipped to do more than the verbal match, even if they wanted to. So we have a lot of work to do. I think Hawkins's book is a good place to start.

April 10, 2015 @ 3:49 pm

Google is, like, 13

Talking to my newly mustachioed son, with his increasing levels of testosterone, can be very frustrating. It's not as though we have nothing in common. Clearly we are both geeks. We watched Benedict Cumberbatch and Keira Knightley in The Imitation Game on Friday for Family Movie Night, in celebration of the beginning of spring break and the complete lack of homework. We both enjoyed it, but it seems we got entirely different things out of it.

He sees details and I see context. As an example, he picked up on a particular piece of the soundtrack, a bit of electronica that repeats but changes with each repetition. I didn't notice that at all until he pointed it out to me. I was too busy thinking about the overall game theory of deliberately NOT using most of the information they learned, in order to hide the fact that the Germans' Enigma code had been broken. The movie estimated that they shortened the war by two years and saved 14 million lives.

Coincidentally, this past weekend was the 30th anniversary of the premiere of Max Headroom, a science fiction series about an artificial intelligence. It's especially relevant to the Turing Test because Max was not in fact an artificial intelligence. He wasn't even a digitally created avatar voiced by an actor. His makers pretended that he was digital, but in fact actor Matt Frewer was just wearing prosthetics to emphasize his already angular features. Then they post-processed the footage of Frewer with jerks and loops to make him stutter, and to make it look digital.

In 1985, that was enough to fool a lot of people, even people in the film industry. They manipulated people's expectations and changed the rules, much as when a chatbot calling itself “Eugene Goostman” passed the Turing Test, partly by claiming to be a 13-year-old Ukrainian boy to lower the judges' expectations.

"Developed by PrincetonAI (a small team of programmers and technologists not affiliated with Princeton University) and backed by a computer and some gee-whiz algorithms, "Eugene Goostman" was able to fool the Turing Test 2014 judges 33% of the time — good enough to surpass the threshold set by computer scientist Alan Turing in 1950. Turing believed that by 2000, computers would be able to, through five-minute text-based conversations, fool humans into believing that they were flesh and blood, at least 30% of the time. Depending on whom you talk to, Goostman's achievement is either a huge turning point for technology, or just another blip."

What does this have to do with communicating with my son, an actual 13-year-old boy? Our different expectations make for misunderstandings. For instance, I'm trying to tell him about the Art of Noise (who did a video starring Max Headroom) as my favorite example of electronic music, and he gets hung up on the definition of electronic music – more specifically on the definition of looping, which he has defined as much from his own limited personal experience as anything – and derails the conversation trying to correct me. Part of that is the aforementioned testosterone, which has a vested interest in making me wrong more of the time now. My dad-pedestal is not as tall as it was.

***

I got to be Eugene Goostman, sort of, this weekend during a workshop at Co//ab on simple tricks for Search Engine Optimization, for manipulating the algorithms Google uses to rank web pages. Those algorithms are kind of dumb. They search keywords, and the more times the keyword appears in the post, the higher the score. Now, high school English teaches us to vary our vocabulary, to make things more interesting by using synonyms, which the bots can't see (not to mention figures of speech that only work by analogy).
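As a toy illustration (my own sketch, not anything presented at the workshop), here is roughly the kind of naive keyword-frequency scoring I mean; the synonyms that make prose readable are invisible to it.

    # Illustrative only: a naive keyword-frequency scorer. Real ranking
    # algorithms weigh many more signals, but the basic tension is visible:
    # repetition scores well, synonyms score nothing.
    import re
    from collections import Counter

    def keyword_score(text, keyword):
        """Fraction of the words in the text that are the keyword."""
        words = re.findall(r"[a-z']+", text.lower())
        return Counter(words)[keyword.lower()] / max(len(words), 1)

    human_friendly = "The cortex is a memory system; this organ recalls, compares, and predicts."
    bot_friendly = "The cortex is a memory system; the cortex stores memories, and the cortex predicts."

    print(keyword_score(human_friendly, "cortex"))  # lower: synonyms hide the keyword
    print(keyword_score(bot_friendly, "cortex"))    # higher: repetition wins with the bot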

Now we have an interesting evolutionary problem. There are two competing constraints in this virtual environment, what biologists call a trade-off. The bots want things to be simple and repetitive. Human readers want things to be more complex and variable. A simple compromise will probably not please either audience. So what do we do?

One solution is to split the post into text for humans and metadata for the ranking algorithms, what ecologists would call niche partitioning. The problem with that solution is that it's too easy to insert irrelevant keywords into metadata to raise the rankings artificially. Every porn site knows that. So nowadays the keywords have to be embedded in the post. I'm wondering if the bots are sensitive to the color of the text, meaning, “Can I put background-colored keywords into empty spaces at the end of a paragraph or something?” That'll be experiment #1.

Another solution is to teach the algorithms to process language in the same way that humans do. We seem to be making progress on that (see my previous post about Jeff Hawkins and the cortical algorithm), but it doesn't work today. Much like computer graphics did not work all that well in 1985. Not much I can do about that one.

A third solution is to rely on humans, the way Max Headroom's makers substituted Matt Frewer for an artificial intelligence. This is what sites like Reddit use, where the readers upvote or downvote pieces of content – posts, links, comments, whatever. Google has apparently taken a shortcut to this kind of exhaustive peer review by simply counting the number of comments on blog posts, a fact that our program director was hoping we would take advantage of to collaboratively raise the profile of all the Co//ab startups. That I'm happy to do, as long as it doesn't take too much time, and nobody tries to censor me. Constraints again.

April 7, 2015 @ 8:28 am

What’s a Hackathon, Anyway?

I spent the first three days of last week at Cold Spring Harbor Laboratories, at the Data Carpentry Genomics Hackathon.  This might seem odd, since as a rule, I don't personally consider “code” to be a verb.  I am very interested in active learning, though, and I've been delighted to find that one of the rules of both Software Carpentry and Data Carpentry is don't let people go more than 10 minutes without typing something. 


Reminds me of my high school typing class, actually, where we had these special flip-chart activity books that stood up on the desk so that we could keep proper posture while we did the exercises on our IBM Selectrics.  That class, believe it or not, may have been the single most practical thing I did in high school.  Touch-typing, even at my slow speeds, has freed me to write almost as quickly and comfortably as I talk, which has in turn allowed me to take a different attitude to writing than most people do.  I view writing as a tool for thinking clearly about a topic, even before communicating about that topic to other people.  It's like being able to have a conversation with myself.


Biologists are historically math-phobic, and the emphasis on molecular biology has only made things worse.  I include myself in that.  It wasn't until I started studying mathematical modeling during graduate school that I really began to think of math as a tool rather than as a subject, and the transition did not happen overnight.  In any case, this is one of the attitudes that the Data Carpentry organization promotes, that computers and algorithms are tools that biologists need to master in exactly the same way that pipettes and microscopes are tools that we need to master in order to do our work.


Apparently calling something a hackathon changes the expectation to something that is much more active than a normal academic “workshop,” which usually consists of some lectures and, if you're feeling ambitious, some brainstorming.  At the hackathon, after three days we had at least the skeletons of six modules of 1-2 hours each, complete with data sets to use and even assessment questions.   That is remarkable to me, and something that I want to be able to do with my eventual employees, to bang out a custom lesson that's testable in the classroom in less than a week.  Same goals as the Lean Startup Method, now that I think about it.

February 17, 2015 @ 2:45 pm

Know Your Opponents

I don't re-post or link a lot of stuff from the official BEACON blog, but these two thoughtful commentaries, one from undergraduate Lazarius Miller and the other from grad student Carina Baskett, caught my eye.  Both of them were models of constructive engagement.  Maybe a little more earnest and serious in tone than readers will often find here, but equally honest in intent.

They were particularly welcome after I spent three cups of coffee reading this fascinating but disturbing piece of journalism by Graeme Wood in The Atlantic.
"The Islamic State, also known as the Islamic State of Iraq and al-Sham (ISIS), follows a distinctive variety of Islam whose beliefs about the path to the Day of Judgment matter to its strategy, and can help the West know its enemy and predict its behavior. Its rise to power is less like the triumph of the Muslim Brotherhood in Egypt (a group whose leaders the Islamic State considers apostates) than like the realization of a dystopian alternate reality in which David Koresh or Jim Jones survived to wield absolute power over not just a few hundred people, but some 8 million."
There's an old phrase from Sun Tzu: "Know Your Enemy."  Americans are pretty accomplished at demonizing our enemies, as just about every cartoonist did during World War 2 (even Dr. Seuss!).  We're even better at morphing elements of other people's cultures into our own entertainments, like the cheesy Secrets of Isis kids'  show from the 1970s.
We have trouble understanding people and groups for the purpose of opposing them.  Understanding a group does not mean agreeing with it, or even tolerating it.  There has to be some distinction between enemies (ISIS / ISIL / those black-flag guys) and opponents, like the people renting out the student center for a creationist conference.  Our two-party politics have lost that distinction.  Scientists, of all people, should not fall prey to those same tribal impulses.
***
Coincidentally, one of my best friends from grade school used to be obsessed with those old Filmation live-action shows and built this page:
January 13, 2015 @ 12:32 pm

Scrupulosity

Yesterday I heard this great interview on Obsessive Compulsive Disorder (OCD) on NPR.  It’s become common, especially among young people, to use the label to describe healthy, high-performing individuals who display higher-than-average anxiety -- about their grades, for instance.  I teach at Governor’s School, and I taught Early College high school students last fall, and they use the phrase commonly.  It’s true that every single one of us has weird, irrational fears.  According to the interview, there’s apparently even a developmental progression of highly unlikely things, from being abandoned as a small child to being completely ostracized and alone as a teenager to sexual perversions to fears of disease and bizarre ways of dying as adults.  Most of us ignore these irrational thoughts, and they go away.


However, some people have more obsessive thoughts, or they last longer (variation).  Some portion of those people reinforce those thoughts by dwelling on them, reactivating them in a positive feedback loop, increasing their anxiety level (selection).  All of us have an anxiety threshold beyond which we freak out and behave irrationally.  That threshold varies between people, so that some of us reach it sooner, with less provocation.  I would guess that people who have lower anxiety thresholds are more likely to develop clinical OCD, but I haven’t seen any real research on that.  There is a scale to measure where you are on the spectrum.


http://psychology-tools.com/yale-brown-obsessive-compulsive-scale/


I took it, and got a 4 out of 40.  It’s subjective, because the questions ask things like how much it disturbs you, not how many times per day the thoughts occur, but it's enough to get a rough idea.  There's a cute bar graph at this link.

http://psychology-tools.com/forum/showthread.php/17745-Share-your-score-on-the-Yale-Brown-Obsessive-Compulsive-Scale-%28Y-BOCS%29 

One of the ways to reduce anxiety is to do something calming, to perform a behavior, which may or may not have any rational connection to the stimulus.  The important thing is that the behavior immediately reduces the anxiety level.  At first the connection might be completely accidental, but in neuroscience, that doesn’t much matter.  Simple repetition strengthens the connection, like Pavlov’s experiments, where he paired meat powder with a bell to “teach” his dogs to salivate when the bell was rung alone.  The same mechanism could produce compulsive behaviors.  The essentially loopy nature of behavior can even lead people to perform their compulsions pre-emptively, to ward off the obsessive thoughts rather than waiting for them to occur on their own, which ends up triggering the obsessive thoughts instead.
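As a back-of-the-envelope illustration (my own toy numbers, nothing from the interview), a simple Rescorla-Wagner-style update shows how bare repetition strengthens a learned association, whether the pairing is Pavlov's bell and meat powder or a ritual that happens to be followed by relief:

    # Illustrative sketch: associative strength grows with each repeated pairing
    # of a cue and an outcome, saturating as it approaches the maximum.
    def pair(trials=10, learning_rate=0.3, max_strength=1.0):
        """Associative strength after each of a series of cue-outcome pairings."""
        strength = 0.0
        history = []
        for _ in range(trials):
            strength += learning_rate * (max_strength - strength)
            history.append(round(strength, 2))
        return history

    print(pair())  # 0.3, 0.51, 0.66, ... climbing toward 1.0 with repetition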


Anyway, it was a great interview, and in particular the author said


you get trapped in this loop where you're desperate for certainty, and you can never get it. You're always checking.


This struck me when I heard it.  It struck me again as I was walking home from Panera this morning, when I saw a soggy, discarded tract lying on the sidewalk:


ARE YOU

100% SURE

That You Will

Go To Heaven

When You Die?


It’s actually that demand for certainty that bothers me, and in my anecdotal experience, I see it about equally often in hard-core atheists and in religious people.  Likewise, a quick Google Scholar search showed no evidence that religion fosters OCD, though there is a form of OCD that centers around religious and moral issues called scrupulosity.  Maybe they aren’t looking closely enough, at the proper magnification.  Maybe it’s not religion in general, but looking at that tract, and remembering my own fundamentalist upbringing, I have to imagine that at the very least there are particular churches, particular spiritual leaders, who act as amplifiers, increasing the existing tendency towards OCD in their followers as part of a feedback loop.  Or selecting for those vulnerable followers, for those people who are especially looking for certainty, who would pick up that tract and go to that church to find out more, rather than saying “Huh, that’s interesting,” and recycling it like me.

December 2, 2014 @ 12:07 pm

But Temporary and Partial Comfort is not Alliterative! (and sells less well)

I spent the Thanksgiving holiday weekend in Kentucky, ignoring the whole Black Friday foolishness as much as possible, although there were a couple of good social science NPR pieces on the phenomenon that I shared to the VSI page on Facebook. Instead we went to the woods and whacked dead things with sticks. We hung out with friends, old and new, and played games. Easy and cheap.

The long drive home on Sunday was much improved by a slew of old Escape Pod stories and a well-done radio documentary on the Disney theme parks from Studio 360. They interviewed park insiders, well-known critics like Carl Hiaasen, and well-known Disney aficionados like Cory Doctorow. Most of their criticisms were about the company's tendency to sanitize everything. Cory D. was more nuanced. He didn't ignore the criticisms and contradictions, but said that it was in fact the tension between light and dark, sweet and bitter, commerce and art, that makes Disney work. It led to some really good rear-view mirror conversations with my son in the back seat as we drove.

This show reminded me of a particular interaction I had with Len Testa, one of the hosts of the WDW podcast (now the Disney Dish podcast?) and my very first guest on this VSI podcast. I was being snarky, in one of my “evil control-freak corporation” moods, and he pointedly reminded me that families spent years, sometimes, saving up to visit these parks. Life is hard, he seemed to say, and who was I to say which diversions from that difficulty were good and healthy, and which were stupid and childish?

I never even tried to answer that implied question at the time, because I agree that being narrowly judgemental is rarely helpful, and because I read comics and play fantasy RPGs for fun (glass houses if ever there were any).  In the car on Sunday, though, when the documentary compared Disney to a religion, a couple of things clicked. One was a CD I had picked up at the Kentucky Artisans Center in Berea, of a style of vocal-only, call-and-response singing that evolved in Old Regular Baptist churches, where they generally didn't have written hymnals (and probably a lot of the people couldn't read). I'm not a church person, but I understand the need for comfort, for shelter from the storm. That CD did not inspire those feelings in me, but I could see where they might in people whose experiences differ from mine. The other thing that clicked was my Buddhist readings and practice. It's not the comfort of religion, or of Disney, that bothers me. I like comfort. It's specifically the promise of perfect and permanent comfort that bothers me. That has always struck me as simple fraud, as impossible and as irresponsible as Peter Pan's refusal to grow up.

Science fiction has that same strain of techno-utopia running through it, sometimes. These days I'm more frustrated by the opposite bias, that the future is only scary. My favorite authors and thinkers balance promise and peril in surprising ways. That's why I joined the Center for Science and the Imagination. They're trying to shift that balance towards optimism, not by eliminating critical thought about our problems, but by applying critical thought to try and solve those problems. Disney made honest (but creepy) efforts to do this.  Another thing that I learned from the documentary was what EPCOT stands for, which I'd never thought about.  Rather than being just a mall, the original vision was an Experimental Prototype Community Of Tomorrow, a living museum of the future, where people would live full-time.  They tried again with the town of Celebration.

Personally, I'm waiting for the Disney fundamentalists to break off from the larger group and build their own walled compounds, where they stockpile weapons and snack foods.  Damn.  There's that snark again.  That's not helpful.

October 19, 2014 @ 3:10 pm

An opportunity to cooperate

This is not simply a cheap call-back to Episode 6: "Iron Dad," although if you go to teen inventor Chase Lewis's YouTube channel, he is indeed a fan of Tony Stark, though his helmet is the movie version, not the animated Armored Adventures version.  Below is a message forwarded to the Greensboro Science Cafe Facebook page from ... his mom?

***

"PLEASE VOTE for Chapel Hill teen inventor Chase Lewis' Emergency Mask Pod invention. It is in the national finals of an XPrize Challenge! XPrize teamed up with Disney for this "Big Idea" Challenge. 

Chase will be speaking at the Science Cafe's October meeting.

The judging will be done by a panel, but 20% of the total score will be determined by a public vote. Fortunately, the voting is from today through Sunday. You can vote once a day. The link is: http://bit.ly/ZtMhF3

The prize is a trip to L.A. to meet with XPrize and to attend the Disney premier of "Big Hero 6." Chase wants to win because he wants to meet with the XPrize people to tell them about his Inventing 101 Now idea, which was the subject of his TEDx talk in September."

September 27, 2014 @ 12:12 pm

Being Wrong is a Good Thing (as long as we learn from it)

Agents of S.H.I.E.L.D. started up again last night (at least, on my DVR it did).  The baddie du jour was Carl “Crusher” Creel, the Absorbing Man.  Where Colossus becomes solid metal, Creel can become whatever he touches.  In the comics, he's an Avengers-level badass, like DC's Metamorpho but dumber.  Metamorpho at least took high school chemistry and knows how to apply it.  Creel just bulks up and punches things.  In the show his transformations were usually partial, to limit him enough that the SHIELD team could kind of handle him, which was probably a good decision.

In the comics, Creel's powers were magical, the result of some weird Asgardian potion, if I remember correctly.  In the show, they wisely just said, “We don't know how he does it.”  A technobabble explanation would just alienate the scientists and the continuity geeks, and the muggles everyone is now trying to recruit to watch all this geek-porn that's being produced wouldn't care, anyway.

Much of science fiction is only flavored with science, anyway.  Some of that is cynical, but there are people who are interested in the real science but not trained as working scientists.  I think I met one of those last night – Piper Kessler, who writes the web series Frequency, which is a romance between psychic time-traveling lesbians.

There are certain non-scientific phenomena that get updated whenever new science appears to maintain plausibility.  It's the same emotional issue to deal with, but the metaphors used to explore the issue shift over time.  Aliens are one.  They used to be fairies and angels and demons; and then they were extraterrestrials, or time travelers; and now they're often extradimensionals.  Psychic powers are another. 

In the earlier part of the 20th century they were imagined to be based on electromagnetic waves, like radio (thus blockable by hats made of tinfoil).  Marvel Comics, using a particle physics metaphor, invented the psion, a subatomic particle that psychics could manipulate with their minds.  They also had a villain called Graviton, who was based on the real theoretical subatomic particle of the same name.  I don't mean the particles are real (no one's found them), I mean that the theory was proposed by professional physicists in a serious way, not as a jokey plot device by a bunch of comics guys.  Confusing, I know.  Nowadays psychic phenomena are more often presented as somehow related to quantum physics, except on a human scale, like in this video, "Alice in Quantumland," last year's runner-up in a contest run by the journal Nature.

And that's a good thing, as far as it goes.  Stories are a really useful tool for thinking about unintuitive phenomena.  Not as good as mathematical models for prediction, but much better than nothing.  Probably better than mathematical models for helping people think about the meaning of unintuitive phenomena.

Anyhow, Piper Kessler's psychics are based on brainwaves.  When many thousands of neurons are spiking in synchrony, those tiny electrical disturbances add up into larger voltage changes that can be measured on the scalp through a machine called an EEG.  When they're out of synchrony, the spikes average out.  Different behavioral states tend to have different synchronous frequency bands.  For instance, deep sleep is usually marked by average frequencies of less than 4 cycles per second, called delta waves.  It's kind of like watching the clouds from an old-school weather satellite.  You can see big things like hurricanes and the jet stream but not what's going on at street level, not traffic patterns within a single city.  Modern spy satellites, of course, have much better cameras and can supposedly read license plates.  You can read a lot more detail about EEG on Wikipedia.
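To make the band idea concrete, here's a minimal sketch (synthetic data and textbook band edges, not real EEG analysis): take a signal, look at its power spectrum, and report which conventional band dominates.

    # Illustrative only: classify a signal by its dominant frequency band.
    # Band edges are the conventional textbook values; the signal is fake.
    import numpy as np

    BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def dominant_band(signal, sample_rate=256):
        """Name of the frequency band containing the most spectral power."""
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        power = np.abs(np.fft.rfft(signal)) ** 2
        totals = {name: power[(freqs >= lo) & (freqs < hi)].sum()
                  for name, (lo, hi) in BANDS.items()}
        return max(totals, key=totals.get)

    t = np.arange(0, 4, 1 / 256)                       # four seconds of fake "deep sleep"
    print(dominant_band(np.sin(2 * np.pi * 2 * t)))    # a 2 Hz oscillation -> "delta"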

Ms. Kessler's metaphor is unrealistically fine-grained.  For instance, during her talk on Thursday about her fictional psychics she described telepathy as having the same frequency as happiness, exactly 30 Hz.  Clairvoyance (displayed by Meredith Sause's character Claire) uses a different frequency, time travel another, and memory wiping and magical healing still others.  That is sort of logical, as far as it goes, but it doesn't take into account most of the complexities of EEG, like the fact that activity varies across different locations in the brain, or that those powers don't have anything to do with one another.

And you know what?  That's OK.  Many years of educational research have shown us that our understanding of the physical world is based on internal mental models.  Those models are never perfect, but they get better with experience.  Making wrong predictions and updating the models based on the results of our experiments is one very important way of learning about complicated topics.  Ms. Kessler played around with her fictional model in a logical, narrative way, and she predicted that if her character Deena the dentist practiced trying to control her brainwaves, she'd be able to achieve a particular frequency and learn to time travel.  What she actually got (at first, at least) was telepathy.

That particular hypothesis was in fact correct.  Not the telepathy, but the ability to shift the dominant frequency band of brain activity.  It's been shown in controlled experiments that trained meditators can enter different attentional states more or less at will.  These states display specific frequency bands.  There's even a New Agey biofeedback-based video game called Wild Divine that's designed to teach people to manipulate their attentional state.  There's also some evidence that trained meditators have more control over their moods, that they can deliberately generate positive emotions, which shift activity to the left hemisphere of the brain.  So Ms. Kessler was right about the phenomenon, but wrong about the basis (location, not frequency alone).

Again, that's good.  That's all any of us science types are ever doing, telling stories to ourselves and our students about our data.  We just have special cultural rituals and tools for doing it that are more effective for the specific purpose of making predictions.  Those rituals and tools are not very useful for people outside the trained “priesthood,” and they aren't very good for coming to terms with difficult situations on an emotional level.  Stories are better for that.

September 21, 2014 @ 9:30 pm

Episode 68: Book Review of ‘The Man from Mars’

"This series presents information based in part on theory and conjecture. The producer's purpose is to suggest some possible explanations, but not necessarily the only ones, to the mysteries we will examine."
--official disclaimer, In Search Of
That show gave me nightmares.
Fred Nadis's website (his other book Wonder Shows looks pretty neat, too)
A neat and extensive wiki built by college students at Georgia Tech (check out their Reptoids coverage)
Also search this blog for "conspiracy"
A Brief History of Lovecraft's Necronomicon

http://sacred-texts.com/nec/

The X-Files
Quote Game: Scully or Blanche DuBois?  (I got a C -- 7/10)
In Search Of, with Leonard Nimoy (I never knew that he was a replacement for Rod Serling)
August 24, 2014 @ 1:00 pm

Extinction Level Events

I spent 5 days in Washington, D.C. last week with my family. We biked the Mall to see the various monuments. We selectively toured some Smithsonia (or is it Smithsonians?); check the Facebook page for that album, including the Hall of Human Origins exhibit. We hadn't really inherited anything recently, so we slept on the floor at the home of one of my wife's grad school buddies. Oh, and my son attended the World Pokemon Championships, not as a contestant, but as a fanboy journalist, hoping to score some footage for his YouTube channel. Of course, being twelve, he got so excited by being there that he forgot to record anything. Maybe there was some inheritance displayed there, after all.

My wife had gotten Huckleberry Finn on CD for the car, but we blew that off for a bunch of year-old Escape Pod episodes. Two Ken Liu stories were particularly resonant with my current life stage.  "Good Hunting" was about the ways in which people (including magical ones) adapt to a changing environment. In some ways it was like Larry Niven's The Magic Goes Away, but richer on an emotional level, and deliberately remixing several subgenres, from fantasy to steampunk.  "Mono No Aware" was likewise sad, what with the destruction of human society and such, but there too, it was the father/son stuff that had me all verklempt.

My son is now producing his own comics, his own vodcast on YouTube. He's never listened to VSI (except when I made him), and he's only reluctantly helped me with a few episodes, but it seems that he was paying attention, that he was watching me work, and that something was inherited there. A transfer of some sort took place. Of course, like the three generations of the Wyeth family mentioned in this exhibit that I saw at the National Gallery of Art, it wasn't a perfect transfer. Grandfather N.C. was mostly a workman, an illustrator of other people's adventure stories; father Andrew was your classic tortured fine artist, painting the same things over and over again; and son Jamie seems more laid back. My son's channel is only about games, and he's already more invested in the process of production, in making the pieces look and sound fancy, than I ever had time for. In fact, he spent part of his Saturday morning yesterday at the Apple store, learning some new editing tricks for iMovie.

So, with the tenure-track position having crashed (in slow motion, and not without warning, rather like the asteroid in "Mono No Aware"), what's next? First, obviously, survival. That's what this fall's adjunct class at Guilford College is about. The household economic machine needs just that much lubrication to keep humming along on one full-time salary while I'm building up my new company, Agnosia Media, LLC.

More importantly, from your point of view, what about VSI, the blog and podcast? Well, the BEACON funding is at least temporarily gone. I've been promising them that I'd start accepting donations like Escape Pod does. Podbean has some different revenue streams I might be able to take advantage of, as long as I don't run afoul of any NSF rules. Content produced with government money is supposed to be public domain, if I remember correctly. Donations to produce new stuff should be OK, but I'll have to check on that.

Short answer, we are not extinct. Expect ongoing goodness.
