Archive for: September, 2011

Eye Contact: Say What?

Sep 30 2011


The eyes are the windows of the soul: they give away our emotions and our interest. Maintaining good eye contact is an art. It is commonly assumed that more eye contact is always better, but this is not the case. There is a delicate balance between directing your gaze into the other person's eyes and breaking that contact for brief intervals. Too much eye contact can be intimidating; too little may imply a lack of interest or self-confidence.


Maintain nice, friendly eye contact with breaks.


Don't be a 'Mr. Stare' or 'Ms. Glare.' No one likes to be peered at; it can give the wrong impression.


When your eyes start to shut and you have to prop up your face, you are definitely giving off the vibe that you are not interested.


Alert but aloof! It's OK to look away, but don't forget to check back in, or you could seem lost in a daydream or not listening.


As you can see, how and when you use eye contact can be an important form of non-verbal communication. It can be used to express interest and open a channel of understanding between two people. However, the length and intensity of eye contact can change its meaning. Also, different cultures can interpret eye contact in their own unique ways and there are certain socially accepted norms that vary across the globe. Seem complicated? It can be.

Eye Contact around the World:

For us in the U.S., eye contact is always said to be important. If you maintain good eye contact with someone, it shows you are confident and in control. But how much is too much? Or too little? Have you ever met people who take eye contact to the extreme? They stand a little too close, giving you a long, purposeful gaze that can last minutes without release. At what point does 'good eye contact' cross over into just plain creepy?

According to frequently referenced experiments by Argyle and Dean, direct eye-to-eye contact should typically last 3 to 10 seconds; anything longer sets a mood of uneasiness and anxiety. The amount of eye contact also relates directly to the physical distance between the two people: if you are sitting or standing close to someone, there tends to be less direct eye contact than if you were more than a few feet away, which makes sense. If you were five or six feet apart, you would want to relay your interest and attention to what the other person is saying through well-maintained eye contact. On the flip side, if you are in very close proximity to someone, intense direct eye contact could be taken a variety of wrong ways, including flirtation or a desire to intimidate, dominate or overrule that person, which might cause anxiety and leave the recipient uncomfortable.

The etiquette of eye contact in France, Spain, Germany and other European countries is similar to the rules we discussed above for the United States. However, in some cultures, direct eye contact can be considered aggressive, confrontational, rude or even disrespectful. This is traditionally the case in Asian, Indian, African and most Latin-American cultures. In these areas, avoiding or using minimal eye contact with others can be a sign of respect and of keeping peace with those who 'out-rank' you, such as your elders, teachers or bosses. In the Muslim world, a man maintaining eye contact with another man is a sign of trustworthiness and honesty; between men and women, however, eye contact is usually minimal or avoided altogether. Travelers to other countries, particularly on business, should take it upon themselves to learn about cultural norms in the country they are visiting so that they can better communicate with people there without any misunderstandings.

Eye Contact and the Brain:

But why are the eyes so important, and why are they so tied to our emotions? The answer lies in the brain. The social interaction and communication areas of the brain, often referred to as the 'social brain,' are stimulated by direct eye contact. It is almost as if direct eye contact is the key that turns on the socializing engine in the brain. And this is innate in humans; we are built this way. Research has shown that "sensitivity to eye contact is present even in newborns" and that "neuroimaging studies have also demonstrated that eye contact modulates cortical activation in infants as young as 4 months of age." Babies appear to seek out and direct their attention towards a person who is giving them direct eye contact, finding it pleasing.

In other species, such as dogs, direct eye contact is interpreted as a challenge and can result in aggressive behavior, but in humans it is favorable. "Some researchers argue that the depigmentation of the human sclera, which does not exist in other primate species, has evolved for effective communication and social interaction based on eye contact." We are clearly built to detect and seek out eye contact from others, and it can automatically stimulate a whole host of emotions depending on how the perceiver interprets it.

Concluding Thoughts:

Practicing the art of good eye contact is a nice idea, but it is always best to be yourself. Read the situation and use the other person's behavior as feedback to determine how much eye contact to give. Do what feels comfortable for you and what seems most comfortable for the other person. Eye contact demonstrates self-confidence and a willingness to listen, and it is an important part of body language, so practice using it to convey your emotions, ideas and opinions in a positive, friendly way. Doing so can take your communication and social interaction skills to the next level.


(The original version of this article was previously posted on my blog.)


NYC Sci Tweetup/Story Collider

Sep 28 2011


I went there expecting to see some big brains, instead I saw big hearts.

Last night I was fortunate enough to make it to the NYC Sci Tweet-up/Story Collider event and it was amazing. My friend Dr. D accompanied me. We both had to work late seeing patients last night but despite all odds (and perhaps some well-placed speeding along the highways of Long Island) we made it to beautiful Park Slope, Brooklyn and even found a parking space (albeit a tight one, I think I have now invented the 80-point turn).

Past the lovely, brownstone-lined streets we strolled, smartphones glowing with our navigation app in hand, until we found Union Hall. It was as I had pictured it, the lights on just enough to cast a warm, romantic glow on the many leather-bound books. This place looked like a library you could drink in, a science geek's dream. We found our way to the basement and entered the small bar room with a staging area and a mic. It was crowded, but a good kind of crowd; you could feel the energy in the room, filled with excitement, gathered for a purpose. No one was decked out in sequin tank tops and ready to bust a move; we were there because we all shared the same interest, passion and maybe the same obsessive, borderline-stalker-ish love. Science.

Did I mention we were late? Over an hour late? Even though Dr. D and I had prepared to get out of work early, I got hung up on my last patient of the night (isn't that always the way when you are scheduled to get out early?), and that caused a delay my 80 mph speeding could not make up for. Upon flashing my credit card at the door to check in at 'will call' and confirm we had paid for tickets, I was ecstatic to learn that we had not missed the entire event and that it was currently intermission. Three acts were left to go: Bora Zivkovic, Anna North and Carl Zimmer.

I am not going to review their stories; I don't really have the authority to do that, and I fear that if I try to summarize them too much, it will not come across the same as it would coming directly from them on that night. I guess the saying really does hold true: "you just had to be there."

What I can tell you is how I felt listening to them.

Their stories were not really the stories I expected to hear. I assumed they would talk about how they first got into writing and offer some inspiration and wisdom on how others could do the same. They did talk about that, but they spoke about much more. They delivered raw, heartfelt emotion, sharing little stories not about how they became the writers they are today, but about how they became the people they are today, giving us a glimpse into their very personal lives and how their trials and tribulations ignited something inside of them. Their passion, their courage, their heart.

Now, as I said, I do not have the authority to review them in any way; I do not even know them, and I actually got to introduce myself to Bora Z. for the first time last night, after a couple of months of guest blogging with him. But I was pleasantly surprised to see they shared the same breed of passion that I do. Sometimes in life, we go through things that forever change us. They soften us to how delicate life is and how precious our individual perspective on things is. It lights a fire in us, and as the minutes tick away and the days roll past and the months turn into years, we feel that we need to share that perspective with others. That unique way we see things through our own filter of experiences. Sometimes things aren't as they appear to us; sometimes someone you once put on a pedestal, you later find out, really didn't belong there. Sometimes you root for the underdog, and through perseverance and passion the underdog becomes the champion. Sometimes you lose someone very close to you, and after that you look at life and everything in the universe in an entirely different way.

Everyone's story is different. The thing that we all had in common at Union Hall last night was that we were willing to share it. Whether it be on stage, or writing, or with each other. I think we all knew how important it was to not let your perspective go unnoticed. You have to put yourself out there and share it.

And with that sappy post, I promise tomorrow I will get back to vision science! Thank you Scientopia for letting me share my perspective on your blog for these next two weeks!




One Juicy Lie: Carrots

Sep 27 2011


Sometimes when I am speaking with patients, I feel less like an optometrist and more like I'm on the TV show MythBusters. The only difference is that instead of setting up cool and elaborate science experiments to debunk urban myths, I'm delving into medical reference sites, journal articles and history books to find the facts behind the misunderstandings of medicine that have been perpetuated throughout the years.

One of the questions I am asked most often is whether eating more carrots will eliminate the need for glasses. Although carrots are nutritious, consuming more of them will not magically reduce your refractive error or the need for a prescription. And they will not give you super-fantastical visual capabilities either.

I do not fault the public for believing this myth, because it was planted on purpose! Yes, the 'carrots and perfect eyesight' myth actually began as a lie, a cover-up if you will, decades ago during World War II.

The Royal Air Force had a pilot, John Cunningham, with an exceptionally good record of shooting down enemy planes at night. They even nicknamed him "Cat's Eyes" and boasted that it was his love of carrots that gave him his superhuman night vision. The British government then ran a whole campaign about carrots, one food in plentiful supply during the war, celebrating their nutritional value and claiming they would help improve your night vision during the blackouts that were frequent in WWII.

In reality, the Royal Air Force was trying to hide the fact that the UK was the first country to successfully employ RADAR (RAdio Detection And Ranging), which was giving its pilots an edge in shooting down bombers at night. The British reasoned that crediting carrots for the RAF's night vision would not raise suspicion among the Germans, since carrots enhancing the eyes was already a theme in folklore.

This little fib about carrots is over half a century old, but it still sticks in the minds of people today hoping for a quick fix or cure for their blurry vision. While a Vitamin A or carotene deficiency can lead to night vision problems (and high doses of Vitamin A are used to treat the most common form of Retinitis Pigmentosa), there is no benefit to a normal individual consuming an excessive amount of Vitamin A.

And with that I say, "MYTH: BUSTED."

For more yummy facts on carrots and for my references, you can check out the original article on my blog.



The Woman behind the Cat Eye Glasses

Sep 26 2011


Hello Scientopia! I am so excited to be here blogging with you for the next two weeks! Today is my first day and I'm a little nervous but I'm just going to throw caution to the wind and put myself out there.

Perhaps I should start by telling you a little bit about me. I'm an optometrist and self-proclaimed science geek. I just love to absorb and learn everything I can about just about anything. I started blogging in 2008 but really got serious about it this past year. When I first started science writing, I was doing it mostly for patient education. As an eye doc, I found myself answering the same questions over and over, so I started to blog about the FAQs of vision science. As time went on, I began to blog more for me! I loved researching topics and quite often found myself curled up with a cup of coffee and my good friend PubMed. After a while, I started to realize that is what blogging is all about. The joy of learning and sharing what you've learned with others.

But maybe even more than that, it is about just plain learning FROM others. Since I joined Twitter, I have been following some of the most amazing and awe-inspiring peeps in the science writing world. I love logging on every day and seeing what they have to say and what they're writing about that day. I've learned a great deal! Tomorrow, I am actually going to the NYC Science Tweet Up/The Story Collider event (which I will definitely blog about on Wednesday, don't you worry). I can't wait to see some of the familiar faces from Twitter in person and maybe even get the courage to walk right up and introduce myself to a few!

Through this blogging experience, I hope to teach you a little bit about the amazing world of vision science. If you close your eyes for a moment now [close and count to five, then open; I'll wait], you can begin to appreciate how valuable your vision is. From the moment you first opened your eyes when you woke up this morning to the moment you lay your head down and close them tonight, your vision is one of your keenest senses, helping you enjoy and navigate your world. I find it fascinating, amazing and wonderful, and I hope you will too!

Feel free to introduce yourself, comment and ask questions throughout the next two weeks and add me on Twitter or Facebook. I look forward to getting to know you Scientopians and I'll leave you with your first eye trick of the day!

Beauty or Brains? Do you see Monroe or Einstein in this picture?

If your vision is fully corrected or perfect, you should see Einstein. Walk back and stand more than 12 feet from your computer; now who do you see? It should be Marilyn.




Reposted: Chemistry For The Zombie Apocalypse

Sep 26 2011

Figure 1: Zombie apocalypse

Whether a zombie apocalypse is scientifically possible or not, it is better to be safe than sorry. Silly? Perhaps... but even the Centers for Disease Control and Prevention (CDC) wants you to think about zombie apocalypse preparedness.

Being prepared for a zombie attack has never been easier.  There are zombie-centric groups like the Zombie Research Society and Zombie Combat Club, numerous zombie survival books and online resources, and even a conference (ZomBcon) that features zombie survival programming.

We know how to avoid and kill zombies, keep from becoming a zombie, stockpile for a zombie attack, pick a location for our zombie-free compound and thanks to The Walking Dead, how to chemically camouflage ourselves among zombies.

The zombies of The Walking Dead pick out their living animal snacks primarily by smell. They're drawn to noise and use their sight, but these zombies know they've found dinner through smell. As The Walking Dead's Andrea said, "They smell dead, we don't. That's pretty distinct." That observation became a plan, and The Walking Dead's Grimes got the group of the living their chemical camouflage by, ummm... direct harvesting (this episode is called "Guts" for a reason).

Figure 2: Eau de Death factory

After watching this episode, my first thought was, "There has to be a better way." The Walking Dead method (TWDM) for producing said chemical camouflage leaves much to be desired. TWDM requires a corpse, puncturing tools, extensive personal protective equipment (PPE) and a strong stomach. In addition, TWDM is ill-suited for mass production, something a zombie pandemic would necessitate. If smelling dead will save lives, we'll need a lot of death cologne.

Fortunately, smelling dead doesn't require dealing with the dead. By selecting the right chemicals, along with suitable production methods, large quantities of Eau de Death could be made.

For that rotting smell without the fuss and muss of TWDM, two classes of organic compounds - amines and sulfhydryls - offer good bad-smelling chemical candidates. Two foul-smelling amines, cadaverine and putrescine (Figure 3), are good choices, as they are produced early in the body's decomposition process. To the amine duo, the stinky sulfhydryl methanethiol (Figure 3) adds a smell of rotten cabbage or eggs.


Figure 3: possible ingredients of Eau de Death

For large quantities of our rotten trio, biotechnology could be just the ticket, with bacteria doing the heavy lifting. The use of the bacterium Escherichia coli (E. coli) to make large amounts of cadaverine and putrescine was presented in a May 2011 article in the journal Applied Microbiology and Biotechnology. E. coli can produce cadaverine from the amino acid L-lysine by having enzymes trim a carboxylic acid group off L-lysine (Figure 4). The same trim job can yield putrescine from L-ornithine (Figure 4), with L-ornithine being the result of a slice-and-dice of the amino acid L-arginine. There is also a second route to putrescine from L-arginine without the intermediate L-ornithine.
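The decarboxylation routes just described can be sketched roughly as follows; the E. coli enzyme names in parentheses are my own annotation from the standard microbiology literature, not something quoted from the article above:

```latex
% Sketch of the decarboxylation routes to the stinky amines.
% Enzyme names (CadA, SpeC/SpeF, SpeA, SpeB) are standard E. coli
% annotations, added here for illustration only.
\begin{align*}
\text{L-lysine} &\xrightarrow{\text{lysine decarboxylase (CadA)}} \text{cadaverine} + \mathrm{CO_2} \\
\text{L-ornithine} &\xrightarrow{\text{ornithine decarboxylase (SpeC/SpeF)}} \text{putrescine} + \mathrm{CO_2} \\
\intertext{and the second, ornithine-free route:}
\text{L-arginine} &\xrightarrow{\text{arginine decarboxylase (SpeA)}} \text{agmatine} + \mathrm{CO_2} \\
\text{agmatine} + \mathrm{H_2O} &\xrightarrow{\text{agmatinase (SpeB)}} \text{putrescine} + \text{urea}
\end{align*}
```

In each case the "trim job" is the same chemical move: lopping the carboxylic acid group off an amino acid and releasing it as CO2, leaving the smelly diamine (or, for arginine, an intermediate that one more enzymatic step converts to putrescine).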

Figure 4: Producing cadaverine and putrescine


Figure 5: E. coli

Our stinky sulfhydryl could also be produced using our bacterial factory workers, E. coli (Figure 5), as research published in Plant and Cell Physiology showed. Modifying E. coli to produce a specific enzyme will get us methanethiol from the amino acid L-methionine, via a more elaborate route than those that yielded our foul-smelling amines.

To get our E. coli staffed Eau de Death factory up and running, we'll need to debug these biotech production routes (see A and B).

As with other perfumes, the Eau de Death recipe must be perfected.  Should other stinky chemicals be included?  What is the proper ratio of stinky compounds to achieve the right rotting flesh smell?  Should we have a celebrity spokesperson?

Our chemical camouflage is off to a good start, but we have a lot of work to do.  Now is the time to start!  We certainly can't wait until we're in the midst of a zombie outbreak.  Anyone who has seen The Walking Dead, or any zombie movie for that matter, knows mid-zombie apocalypse isn't the best time for this type of research and development.



Many thanks to GertyZ, who rightly pointed out during a discussion on this post that any Eau de Death should contain a sulfhydryl.

Figure image attributions:
Figures 1-2: clip art from Officer 2010
Figure 5: image from


The farewell post

Sep 25 2011

And so we come to Sunday, and my two weeks here are done. I hope I have been entertaining.

If not that, then amusing.

If not that, then not too distracting.

If not that, then somewhat bearable.

If not that, then not affecting your daily life too badly.

If not that, then at the very least not causing long-lasting nightmares, mortal aversion against all mathematicians, and a vague dislike for all that is Finnish, plus psychosomatic problems resulting in the unglamorous life of a compelled serial plus sign defacer.

If not even that, well, then I hope I have not created a disturbance in the information structure of the Internet, leading to Google going bing! and collapsing, dragging every other search engine into overheating and similar collapse, triggering a netwide Dark Age of ignorance and forgetting, of link rot and whispers at fading threads of a lost land of something-tube Eltuberado where there were videos, and crenellated forums besieged by hordes of guttural griefers baying for blood; a world where pagecounts plummet and millions-viewed videos lie in silence; a world of darkness, grue and Zalgo; a world which, when the lights of searching eventually rekindle, is illuminated as one where nothing remains but hostile meme-gnawing trolls and bland obsidian forts brimming with the necrotic spamspawn of Hell; only this where the world electric used to stretch, ephemeral, beautiful, and endlessly variable in shock and delight; a new world, maybe, but with the old net lost to bit rot and indifference, with bing but no boing, and no certainty save this: that idiot guestblogger, he caused this all.

Because that would be bad.

If, on the contrary, what you have seen has been entertaining and you'd like to hear more of this Finnish cretin, well, I blog at and as Masks of Eris; I twitter as the same, I am not on Facebook because seriously I don't have the time so I just read Failbook instead; I have a site called Mirrors of Eris, or "perversions of religion, philosophy and conspiracy theory" (infrequently updated), and I draw a doodle a day over at Lemmata. Also I spend too much time on my computer, and have done enough link-pushing for one paragraph.

And I might as well end this with a few doodles. Here's one about a new groundbreaking innovation in medical science:

And one about how some words just stay with me and feel like they have a sinister double meaning, a meaning hidden behind the placid facade of everyday existence:

And here's a question: If people are willing to believe in mediums, why don't they demand this?

And one just because:

This has been a fun two weeks; my most brain- and heartfelt thanks to the Scientopia Rulers generally (The Keepers of Scientopia? The Blogfathers and Hivemothers? The Scientopianites? The Scientopiaries? The Members? I'm not good with formalities.) and to Arlenna especially. Thanks to the commenters, the readers, and the NSA. Farewell, all! See you again! Don't eat the yellow snow! Don't lick cold iron! Don't trust the bears of the forest! Other plausible but false Finnish farewells!

Yup; I think that was all the bad jokes and other ideas I had left. Bye; I'm going back to homebase.



Sep 24 2011

"Hello, Mr. Publisher."

"Good morning, Ms. Mathematician."

"Well possibly, depending on the results of my book proposal."

"Yes, that is the reason I called you here. Ms. Mathematician, your proposal for 'Differential Equations Are Super Fun!' looks a lot like a textbook."


"Actually, if I am not mistaken, it is your textbook, 'Differential Equations for the Freshperson', with a different cover."

"I did not see a point in rewriting already proven results."

"I'm afraid what we have here is a conflict between the mathematical and... financial senses of 'proven'. Could you add some, uh, elements of entertainment?"

"Will try, Mr. Publisher."

* * *

"Was the second draft better, Mr. Publisher?"

"Yes, surprisingly."

"Well, I asked my husband for comments. He reminded me we have a twelve-year-old... well, boy, but the sex is not necessary for the result. He looked the book over and gave critique."

"That brave child. Now, the book as it stands represents a perfect balance of exposition and entertainment for your child---"

"Yes, I am sure it does. The proof is on page 212."

"---but the more general audience does not have the same tastes."

"The general audience does not like dean jokes?"

"Er, no."

"There goes the plot, and Appendix C! But fine, give me an average person and I'll rewrite the book to his, her or its tastes."

"Erm. We're going for a broad audience here..."

"So the book should be rewritten so that its balance integrated over all audiences is as good as possible, weighted by the size of those audiences? Good being defined relative to each person's preferred balance of exposition to entertainment?"

"Er, yes. Well put. Very well put, actually."

"I'll get back to you on that, Mr. Publisher."

* * *

"Mr. Secretary, what is that?"

"A letter from that mathematician, Mr. Publisher. Containing... 'A possible appendix. Containing a proof that the insertion of multiple insider jokes for multiple audiences does not generate broad cross-audience appeal. Also submitted to Bull. Anal. Prod.', whatever that is."

"Oh, great."

* * *

"Your new revision is excellent, Ms. Mathematician! You are a genius!"

"That's what my graduate students tell me. They are liars, though."

"Ah, but, we have a problem. It turns out, and I should have noticed this before, that optimization over the audiences is not enough. We must optimize for the tastes of the booksellers, also."


"And for our marketing men."

"Oh dear."

"And for moral guardians and other easily irritated twits... I should have mentioned this, but you can't go and prove the nonexistence of God in a general audience popular mathematics book!"

"But I thought it was a worthwhile corollary of the Easter Rabbit Theorem... oh, fine. I'll get to work optimizing over optimizations of audiences."

* * *


"Mr. Publisher!"

"Ms. Mathematician? What are you... it's the middle of the night! What are you doing here?"

"You never told me about the time variable!"


"The optimizations! The audience preferences shift over time... oh, this is going to so complicate things you better hope I can approximate the quasidiscontinuities with smooth functions... I'll be back at you, later."

"Fine... would you just close the window after you... ah, never mind."

* * *

"So this is the perfect moment?"

"Yes, Mr. Publisher; by my calculations, of all the days of all time, these two days are the most auspicious moment, audience-, media-, watchdog-, bookseller- and management-wise, to launch the book in its present configuration."

"Excellent. I'll go and do that in the Publishing Big Button Room. Please enjoy the complimentary published author chocolates in the meanwhile."

"Okay, sure. There he goes, and so I am left here alone describing the situation in this small room with a huge window in a skyscraper... wait, this is one of the things normal people don't do. Ah well. Mmm... these are good chocolates. Good chocolates, as far as I know. And all thanks to a carefully planned book. Oh. I forgot to ask if it's reasonable to assume all people are identical spheres of uniform density... but eh, why not. If people are not as elegant as a model, so much worse for the people."


The third referee: a short story

Sep 22 2011

Because there isn't enough lab lit (or int lit, for integration, derivation and the related math stuff?), I shamelessly repost a short story of mine, titled "The third referee".

Some parts are smoothed over for flow; some are just the result of the author being an ignorant young fool; but this all, generally speaking, could happen.

* * *

Some philosophers of science have a habit of fondly, dreamily saying something like this: "You know, science understands much — but we, the understanders, are just jumped-up apes. Who knows, maybe there are things in this world we will never understand. Things that are just too complex, too weird, for our brains to process! Things that'll leave us scratching impotently like a rat against a glass wall. I don't mean Lovecraftian 'flee howling into the safety of a new dark age' stuff or religion, but just cases where reality is too big and complex for short-lived, ill-communicating hominids to grasp. Maybe we'll be faced with that, some day in the far distant future."

As I said: some philosophers of science say that.

I am not a philosopher of science.

I am a research mathematician; hello; that is as far as I usually get before people start to politely disengage. (Do biologists ever get told conversant X was no good in biology and really, what's biology really useful for? ("Beetroots are more useful than square roots, ha ha ha!") Then again, do biologists get gazed up at like they're Grecian gods of dreadfully inhumanly obscure abstraction?)

(Of Grecian gods I'm not sure if I mean Hephaistos the Gimp, or randy Zeus, lover of women, animals, inanimate objects and natural phenomena, all in the same carnal sense. People that look in awe at mathematicians don't seem to know either; the judging gaze vacillates between a celibate and some unnatural perversion, and then turns away.)

I am a research mathematician, and I do not like those science-philosophers because they are full of caca. "Some day in the far distant future"? Let me tell you about Project Caca.

That's not the actual official name, of course; funding is difficult to get, so the name was sesquipedalian and polysyllabic in the tradition of "Monitoring systems for time-related reduction of thermal damaging in caffeine intake manifolds", i.e. waiting for the coffee to cool — the name had a precise and meaningful meaning, and there were maybe twenty people in the world that could understand it without looking for the definitions.

Luckily one was reviewing it, and bang! Me and two graduate students were funded for two years, more depending on results.

In three months we had a draft of a paper.

In six months that draft looked like a Gordian Möbius knot.

In nine months that paper had bloody tentacles spouting from the fractally weirdening insides of it, metaphorically speaking.

The problem was this: in mathematics, it's difficult to know if something is true. Suppose you want to know if all Putz functions have the Nebbish property. Either you find a big technical circumloquacious proof they are, or you find a non-Nebbish Putz function.

Then again, you can assume a non-Nebbish Putz function and show that with that assumption the structure of mathematics crumbles to the ground, beginning with one equalling zero; that's a contradiction, and hence there is no such thing and all Putz functions have the Nebbish property. Or, finally, you can find a scrotum-shrivelingly horribly long proof that, yes, we have a non-Nebbish Putz though we don't know what it is.

Finding, building and checking the proofs can take months, years, lifetimes; I get the dry heaves thinking of all the things that are true but might need Fermatian and Sisyphean labors to actually prove. And often that proof is a single winding tortuous (or torturous) path that, if a single cobblestone is missing, falls apart.

A counterexample or a contradiction, on the other hand, might take just as long to find — or might take a lucky five seconds, coming like lightning out of clear sky.

And usually you can't know which it will be: True or false? Years of work or a single lucky insight? Within your grasp, or beyond it? Three choices of two; eight possible combinations.

You expect me to say we worked for nine months on a proof and then someone published a one-liner counterexample, right?


Expecting, then, something related to the philosophy of science I began with, you probably anticipate some Gödel shit, don't you? That we found a statement that is undecidable? Or that we found out mathematics as currently done is complete and has no undecidables, and we found something worse, an inconsistency, showed that a statement is both true and false?

Get out of here.

It's on Wiki fucking pedia that mathematics as currently done, ZF set theory plus the Axiom of Choice (ZFC), has undecidable statements (there's a list of examples!), and also cannot prove itself consistent or inconsistent. Finding an undecidable would be nothing new; finding an inconsistency is impossible.

Why is it always either Gödel or Fermat? There's so much interesting mathematics, and people always go for swordfighting magic mysteries or Gödelian pseudo-quantum shit. I feel like a doctor that's never asked anything except the name of that bridge between your nostrils, and the identity of the funniest objects found in the human cloaca.

No, what I am about to tell is worse: in twelve months we had a paper.

Said much like one would say, "in twelve months we had a peepee".

Gödel and Fermat being exhausted, you probably expect a shaggy dog tale now, that we had found nothing and this is all a sad tale of the difficulty of getting funding, and of the unpredictability of mathematical research and ends with me living penniless under a bridge chewing on a graduate student's shinbone like a troll.

Ah, the predictability of mathematical narratives gets my goat sometimes.

No, we had a paper; not nice, but publishable. A proof — two different proofs actually. Both bristled with inelegant assumptions, and led into Quasimodo-like siblings of already known results in a more general case. Let me explain.

Suppose the known case is this: "If you have $900, you can get a 1963 Ford Galaxie with it." That's a real-life theorem I suppose; I ride the bus.

Our result was something like this: "If you have C credits in currency X with the doubling property, one pee-harmonic black goat of the woods and a set of rainbow candles with a manifold Q, you can get a F(X,C,Q) Goatxie with them". Where F is a function that looks like a hedgehog with a Hölder enema.

We sent the paper to be reviewed by a mid-rank journal; mathematics being mathematics "mid-rank" means the same kind of an impact factor as physics journals which publish high school science club analyses of fart gas, with pictures. (Not bitter; just sour.)

I myself wasn't sure what the fuck our result actually said or did; my graduate students understood even less.

That's the problem with mathematics. As a graduate student of a particular field you're in the dungeon of Nethack with a tiny flickering feeble finger-Maglite to guide you: you can peer at details but you have no idea of how things fit together. As a result when you prove lemmas you're doing fine, but when you try to write an introduction to a paper your advisor boggles and, having unchoked herself, tells you what you were actually doing all those months. (The equivalent comment from a physicist might be, "What do you mean, 'mapping the streets of Bucharest'? This is predicting water channeling on Mars!")

Then you get your Ph.D., and expect yourself to be magically transformed into a higher being of pure understanding and energy, like Bruce Willis and Carl Friedrich Gauss rolled into one, so that you can just point at a problem and say: "That's the one. Let's roll."

Not quite so.

And so I and my two graduate students sent to a mid-rank mathematics journal a paper that proved something that was hopefully new, surely horrendously inelegant, and probably, possibly, hopefully worth publishing.

The journal sent the paper out for anonymous peer review to see if we were full of shit; this is standard practice and not something that happens to just my papers, thank you very much.

We went on; and in the remaining year of the grant got four papers done. One of them was nice, and one actually good. The other two went into the Great Hural Journal of Mathematics and Yurt Sciences, and into a conference book bashed together in memoriam August Legend Pseudonym. Also, got one of the graduate students elevated from the Mount Doom-ish slopes of thesiswork to the Lengian plateau of Ph.D., with its associated new woes and insights. (Such as the human-corpse eating cult of interview committees; again, not bitter, just sour.)

With the other graduate student, we got into an agreement that it might be best to aim for a thesis defence the next year: the same procedure as every year.

Then, a year having passed, I wrote to the mid-rank journal and inquired, politely, about our paper. (A year is not that much in this business; the last time a mathematics paper was rushed out with great fanfare was when Grak discovered the number eleven.)

A day later, the response e-mail... do you think we actually use paper? What century are you living — wait, no, I'm not getting drawn into the "when the millennium turned" spat again. I got punched for that once already; the last time I'll try outreach while waiting for my kebab and fries.

A day later, the response e-mail came: the editor was very much sorry for the delay, but there had been complications: the referee had died.

Knowing that August Legend Pseudonym, a person who (to continue the metaphor above) defined and named (after himself, naturally) the 1963 Ford Galaxie, had died recently, I said "Hot diggity!" and felt uncertain.

On one hand, it would be nice to think old Pseudonym could have read the paper, could have liked it, had he had the time.

On the other hand, suppose the paper was schlock, dreck, muck, something that'd come back with a three-line comment showing a shorter, simpler way. That's what every mathematician dreads.

No, scratch that. What every mathematician dreads is a single question mark, next to something you blithely assumed would of course be true. Something that, when you write it out and begin checking, turns into a leak in the bathyscaphe Trieste, with the Mariana Trench pissing in: something that is so badly, inescapably, unavoidably WRONG that the entire paper crumples into a red singularity of shame and blips out of existence.

The thought of that possibility, and Pseudonym seeing that, written by me, a young, up-and-coming researcher, would have been... well, not disastrous because mathematicians as a rule are too detached to be malicious, but fucking embarrassing.

But no; Pseudonym was dead, and the referee, Anonym though possibly also Pseudonym, was dead too. The editor said the paper had been sent to a second referee after the sad news, and he would send a reminder to make sure there were no more delays.

I forwarded this to the graduate student and the new Ph.D., sans commentary on the pants-wetting nightmares of authorship, and forgot about it.

A week later an e-mail dropped from the editor: I got ready for a rejection, and read with amusement, then with vague unease and shame, that the second referee had died, too. They were going for a third, and very sorry for the delay but surely I understood.

Out of morbid curiosity, and in a conference a week later, I made inquiries. (Not a conference in the foreign parts, so no cheap beer and exotic sights — but then again, no danger of a rubber-gloved math-flunking yokel sodomising you for fun and security. They always seem to pick, one, the guys with the biggest paws, and two, me.)

I made inquiries — which was easy, Pseudonym having died so recently, so I had a conversation starter — and tried to find out who the other dead referee had been. Morbid curiosity, nothing more. Besides, our circles (to use a mathematical allusion) are so small and compact that most referees, needing to be people who know shit from solid research, are people you sort of know. Meaning you've read their papers, seen their presentations, and in one unfortunate case, seen them nude and arrested after making drunken suggestions to the mayoress of Poznan at a posh and lush academic reception.

I found out there had been a second death, indeed: a tall, thin woman I may just as well call Noma-de-Guerre; a mathematician that spoke loudly, though with an abominable eastern accent, waved her hands like a scarecrow on speed when she spoke, and had a brain that would read your thesis over once, call it trivial, and be entirely right, from her own Olympian viewpoint.

It's scary to run into prodigies. Scarier still when they say they're not special; they just work hard. Makes you feel like a worm; a worm in the blaze of glory of a goddess of your shared ethereal domain; but still a worm.

She had killed herself; that was the story. Not because of any crap about geniuses being unstable, or too good for this world, or that being the price of brilliance; all that is slander made up by worms like me, unwilling to look up in adulation.

No, Noma-de-Guerre had been courteous, stable, polite, and restrained as long as not talking about her love, our love, the goddess Mathematheia; there was no reason anyone knew, had heard or could reasonably guess for why she had applied for a membership in a gun club, taken three lessons in safety, and then put a pistol to her head and said these last words to the safety attendant: "I apologize for the mess, but smaller calibers would not be certain to kill."

In addition to that (and a badly slept night for all of us), I gleaned this: Pseudonym had died of a stroke. Old man, too much stress, understandable really. Who among us could imagine being so active at such an advanced age? Which led to an angry scene, this happening in a pub as it did: a tense, drunken Norwegian function theorist, ten years Pseudonym's junior, took this as a hint that his mathematical virility was flagging. That was defused with a lot of quick talking; the night ended with the Norwegian weeping he had not gotten anything big done his whole life, though he had wanted to; we either comforted him with sympathy and admissions and fears of the same, or (in a few instances) were too young and arrogant to admit we would just as likely be the same, eventually.

The third referee did not coincidentally drop dead; I was, in an obscure way, rather relieved. His verdict was: Do not publish!

Not what I'd hoped for; but much better than the editor writing "Hey, the third guy died too! Do you want a fourth?"

I muttered a few curses, hit forward (that always helps), and after a quick consultation, sent the paper to a second journal, a sort of lower-mid-rank journal, slightly above J. Dept. Xerox: if they published it, the paper wouldn't be seen by much of anyone, but I really wanted to know if there was something so bloody wrong with it, and I really didn't want to do that through the jinxed mid-rank journal and that damned snooty dismissive no-comment third referee.

Thus into the Journal of Local Heroes it went; and either because they mean business, or because they mean business in a different sense (i.e. are unprincipled money-grubbing hacks with referees from the St. Unread Academy of Approval), the article was approved for publication in a month without a single comment save one about typesetting.

So, so much for my faint wish of substantive critique; I cursed again, forwarded the news to my two co-authors, and turned to other business.

In time, I added a pre-print PDF to my homepage, and to the open, sub-rosa and probably not really legal PDF depository homepage of the greater research team on the subject. (If academic papers could be monitored for copyright violations, you wouldn't need to worry about education in the prison population. Hell, we could organize a whole university on the inside!)

This and other business I was then yanked from, a few months later, by one more e-mail.

Well, less a mail and more an extended caps-lock-heavy rant which I would have deleted, had it not come from a university address in Bulgaria.

It's a delicate balance between being called "a shit-puking devil" in the first sentence, and being addressed out of the blue by a remotely familiar colleague of the same general area of interest.

Smelling collaboration (well, one needs to be hopeful) I read the rant; then re-read it; then made a few inquiries.

The Bulgarian was my third referee, the snooty dismissive one; and, one, he had just been forcibly held in a mental institution for a few months after a total shrieking raving mental breakdown, and two, he was not happy with me.

Figures, I thought: he has to have his breakdown just after rejecting my paper. Why couldn't he have had the giggles, accepted it with tittering praise, and then gone totally ga-ga?

As I re-read the rant, something like meaning slithered out from between the boulders of badly translated and occasionally incoherent abuse.

Returning from his forced vacation he had found my paper, which he had thought he had rejected, squashed and obliterated... published in the Journal of Local Heroes, with not a letter changed. More specifically, he had found the paper because of finding in his inbox links to it, links wrapped in questions and layers of puzzlement from his local research group. Then, with no pause, he had flown into this incandescent rage, and extruded this mass of abuse towards my e-mail address, me being the corresponding author and thus the traditional target for comment.

At that point a post popped into my mailbox, from the graduate student — he had been cc'd the same abuse, and wanted to ask what it meant.

The ultimate meaning that slithered out, and that I did not forward to my student, seemed to be something like this: I was a stupid shit-beast for publishing such a loaded article, and had no idea what destructive potential was hidden within it, in an ignorant aside of the whole.

I imagined the air being rent open, the vengeful ghosts of the past surging forth and Nazi zombies rising from lakes in Bavaria — then shook my head and tried to puzzle out the exact nature of this "destructive potential".

I could not see it.

It certainly was not an application; we mathematicians have a special grumbling, grudging vocabulary for that, and my Bulgarian referee had used none of that.

Hence I took the article, sent it to my own advisor of long ago (young beliefs in omniscience die hard), and asked her if she saw anything weird in it.

The next day the phone rang, and I was treated to a half-hour harangue of livid audial abuse; my olden advisor had not liked what she had seen. After shouting myself hoarse with "Why?" against her torrent of doom, she finally told me the reason.

I had not let loose the demons of hell through some trick with dimensions and measures.

I had not discovered an application of Jacobians as weapons of mass destruction.

I had not broken mathematics with some Gödelian uppercut.

No, nothing like that.

I had just written, and gotten published, and into circulation and already widening notice, a paper with an innocent, oblivious remark in it that — for those with the eyes and the wits to see — proved that all the problems and questions of our shared field of study reduced to three lines of dismissal, or to a single sentence —

Trivial business; has nothing of further interest in it.

You can expect some abuse when you accidentally show the whole subject collapses into another which has already been solved and exposed to death. Sod philosophers of science and their talk of what we may never understand; understanding can cause funding problems too.


Measurement is important

Sep 21 2011 Published by under Uncategorized

Went to hear a dean talk-talk today; was 30% inspired, 30% indignant and 40% terrified, as usual. Also doodled; no, this is not a euphemism for inadequate bowel control.

Doodled, that is, drew, a scale for a most necessary machine.

Now I just need the machinery, the voice recognition software and the lot of wires and chips. Possibly a klaxon, or one of those Geiger ticker things. (I'm a quality concepts innovator, me.)

Some obvious questions, answered ---

Q: "Evacuate immediately"? Like, the premises?

A: Well, if that's what you call it. I call it inadequate bowel control.

Q: Can you give examples?

A: Notice that there are nine rough categories, divided into three supercategories. The most bogolicious of these is exemplified by David Icke, people who find Noah's Arks, the History Channel, perpetual motion machine makers, and any administration above department level. Prolonged exposure at this level is a health risk (increased risk of quality and quantum) and a wealth multiplier.

Q: What about the other two supercategories?

A: The lowest supercategory is mostly empty.

Q: Why is "Normal" above "Medium"?

A: Because that's how the things are.

Q: "Replace meter"? Does this mean there is no zero level?

A: Well duh.


Talking animals

Sep 20 2011 Published by under Uncategorized

Note: Why yes, this is a repost from Masks of Eris a month ago, but I think this might amuse the audience here. Possibly in the sense of "Ha! The mathematician fails in all the sciences forever!"; possibly in other senses, too.

Sometimes I wonder if it would be nice to write books for children.

Not ones that the child will a decade later realize were vast metaphors for drug addiction and suicide... well, not entirely those ones.

Just weird and not dull ones.

The problem might be, for each page I'd finish there'd be three pages of MST-ings where something adult suddenly happens.

My father used to tell stories, when we three children were young, stories to get us to quiet down for sleep... stories with Aunt Organic-Waste-Basket and various characters, all anthropomorphic sausages of varying brands, in them.

Childhood is weird.

Below is my essay on what kind of children's stories I would write. After reading it, you might conclude it is best I do not.

* * *

My personal idea for a best-selling series of children's books (okay, cribbed from me and dad joking around, a long time ago) is the Adventures of the Gangster Squirrels.

Or, actually, the Gangster Squirrels are the bad guys. They blackmail and steal and don't even fear the Human People. First book: Gangsterioravat ja sarjapurija; The Gangster Squirrels and the Serial Biter; I don't know what the plot is about or who the characters are.

Also there may be evil pigs; I've got the perfect name for their shuddersome boss: Kärsimys. A Finnish word that may recall kärsä ("snout"), especially on a pig, but is just capitalized kärsimys, "suffering; intense, terrible and lasting agony". That's one baconmaker you don't want to make unhappy.

Er, but wait. If I introduced human-sentient talking pigs and squirrels, just what kind of a horrid world would that be? Think of it as an alternate world of science fiction: boom! all mammals are sentient now.

Turns out carnivores are not nice. How would you feel about a human tribe that tries to eat you?

Turns out humans are worse. There are entire species that are held captive by humans, robbed of their young (chickens), carried away and butchered (pigs), abused in a parody of their maternal reactions (milk cows), or subjected to involuntary labor (horses).

And don't even talk about dogs, the drooling harlequin hordes of endlessly varying genetic perversion, the happy coward lapdogs of the human-colonial oppression regime.

The idea that humans are sentient and have a language too, but never notice their animals are the same... is just too horrible. (Also highly implausible.)

So what then? An animal enclave deep in the woods, with nothing known of humans except old, dark rumors? A peace of indifference between the herbivores, and a common hostility against carnivores. (Among whom, "fox shall eat no fox"? Or "Chicken people, bring us five of your young every week, or the small bear tribe shall come and utterly destroy you all. Accept or be destroyed. We do not bargain with meat.")

And what does "sentient and with a language" really imply? Tools? Houses? Clothing against the elements? How much can a pig without opposable digits, without a pair of limbs that are off the ground, actually do? And with animal lifespans, how much room is there for intelligence --- if wild pigs live for 25 years, and squirrels for 16, that probably implies something about the culture, and the transmission of culture. (Then again, after assuming sentient squirrels with a language it might be silly to assume a normal lifespan, but hey, of such details are stories made.)

If mammals (or say "big enough animals") are sentient, would those that live the longest grow to be the smartest, the most well-knit and cohesive community, the most able to retain and exploit inventions, and eventually the Lord of Creation?

Swans live for a century. (I'm just pulling these numbers off a seemingly reputable list.) So do carps. Tortoises live a century, or several centuries. Imagine the intellectual development of a human being that has several centuries of time to learn and grow. Now imagine a race of such creatures... with protective armor!

If there are enough turtles, they will rule the animal world.

This, of course, assuming there are enough turtles. Probably not, because in the great fashion of children's literature I'm thinking about the rural corner of Finland I grew up in. (Swans, yes, occasionally; but not too many turtles.)

(What does it mean that animals are smart? Would Bucephalus have thrown Alexander off in exchange for Persian oats? Would the geese of the Capitol have taken bribes? Would Spartacus have been a ram? Or is intelligence, in the world of the story, a recent, local development? Then consider the trauma, in the recent Rise of the Planet of the Apes movie, of Caesar thrown among the "normal" monkeys, who to him were a horde of gibbering, screaming heavily developmentally disabled people; people that looked the same but had... nothing in there.)

(There's no obligation to ask why; but one is forced to ask "what then?", or end up with a weak and unsatisfying story. Which, mind you, is crazy business for a talking animals story, but I have too much free time.)

This may be an unfortunate case where fantastic racism or speciesism really is justified: it does not seem far-fetched some species will be smarter, just by brain size (squirrels and pigs, come on); and the lifespan alone will make some cultures more expansive and wear-resistant than others.

And the biology: different social structures would lead to different morals, I think. Pack animals have pack virtues and vices: obedience, subservience, and the like, just out of biology. Carnivores and herbivores live differently, need different traits to survive; and if intelligent, will probably start with deciding those traits are moral and so decreed by the Fox-God or the Rabbit-God. (That's where many of the best and worst of human morals come from, after all: from social monkeys that will eat anything.)

Those species with a short lifespan would, to put it youthfully, so totally get exploited by the long-lived species. Imagine a tribe whose intellectual capacity stops at the level of a sixteen-year-old human, not seemingly, because of difficulties of culture, language and stimulus, but because after that the animal dies. It's not that the work-filled life leaves no time for intellectual advancement; there is no time, work or no work. Human sixteen-year-olds feel smart, but aren't all that; that animal tribe would be hoodwinked over and over again, until it got used to the fact, or became very unwilling to talk to strangers. There would be lackey tribes, and savage, suspicious isolationists; and king species... but no interbreeding. Even I know enough biology to say that is impossible.

In human history, marriages and interbreeding have been very good at making humans live with each other --- how about a situation where a fox is a fox and a pig is a pig, and the two shall never mix? That situation just screams the ease of genocide. There are no half-pigs, or foxes with a pig grandparent; if one species decides it doesn't like another, the line is drawn clear, and deadly. And beyond extermination, if a king species decides it is the only one fit to rule, its rule will not be diluted by bed-hopping. (I think it's an inevitable effect of intelligence that there will be bestiality between all species, rishathra, or at least between those with the power, and those without... but this may not be a meaningful speculation for a children's book.)

(Marriages and sex... well, one would need to think about those too, but not narration material, no.)

Mind you, "different species will have different cultures" needs sufficient numbers; I have no idea how many rabbits, squirrels, foxes, etc., there are per square kilometer of forest. And if, as an effect of intelligence, communities will form... what will a rabbit village look like? How long will it be able to sustain itself by foraging? Will there be carrot fields cared for by rabbitses? How much social interaction do you need for language, and for culture?

Where, on the incline from animals to the stone age, to Ur, to Carthage, to medieval folks, to the Renaissance and to mobile phones, are these animal cultures? (No badgers with jetpacks, thank you very much. Not my genre.) Remember that each "stage" builds on those before --- unless one can look at humans, or swans, and leap-frog into the neighbor's utopia. But what do the neighbors think of badgers in waistcoats? Would it be the easiest to just assume humanity has gone extinct, giving the animals room to roam and grow without their cultures being the uneasy refuse of humanity... or does it stretch credulity and imagination too much to try to have them totally independent of humankind, except for old graves and rotting concrete? (Or is there someone in orbit, chuckling at what the medieval badgers do? If so, a very bored posthuman doing long-range sociological experimentation, or an AI unsure if this is what the last order for "the happiness of almost human pets" meant?)

Also mind you, "different cultures" should not be taken as "good cultures and evil cultures", much less "good and evil species". I very much doubt some would be kind, hospitable herbivores full of love, cheer and cuddly modern values, and others cold treacherous vermin red in tooth and fang. Each culture would have its ups and downs; and a culture, or a species, would not fully define each individual, except as far as stereotypes and biological imperatives kick them in the head. ("He's a badger. Badgers are no good. Lazy, stupid, venal, greedy, not worth your trust. Tell that badger to get out!" --- that generalization would be nearly as bad tosh about animals as it is about human groups.)

Dogs don't seem to top thirty years, and average much less; this means the greatest dog thinkers will need to get their ideas early. A dog university won't offer doctoral degrees; the students would die of old age before graduating.

Or squirrels. One shouldn't assume that if the average squirrel lifespan is 16 years, then squirrels magically get a huge vocabulary when two years old, just so that they can have a culture. No, they will (as I see it) grow up like human children do: slowly, maturing physically quicker than mentally. (That's a huge problem: say two thirds of your society is mobile, able to forage and to survive... but mentally on the level of not particularly bright human toddlers. What manners does that lead to? What laws?)

There's a horror story for you: to be on the mental level of a ten-year-old, and going upwards, and already a grandfather and a survivor of a decade of mostly instinct-based forest life. To, for the first time, consider yourself as a you, as a separate being with goals and desires beyond food and shelter... and to know that your children have children already, you have no idea who you had those children with, and in five years you will be so frail you'll likely be et by a fox.

For something worse, consider the fox. How about developing an adult sense of self and a sense of morals, and then realizing you've been hunting, killing and eating other sentient beings all your life? That means instant denial; carnivore societies would not regard the meat species as precious, important minds; they would be talking meat, with "meat" being more important than "talk". There might be sporadic attempts at vegetarianism; but overall I think those would be societies more cruel and callous than anything I can think of. Humans are proficient in racism, but even the worst racist doesn't need to accept the death and devaluation of a sub-being as the price of their every meal. Even the most callous capitalist doesn't actually kill and eat the workers pursuing his profits.

Would the foxes and wolves prefer "live hunts", or capture and breed particularly tasty species and then hunt and kill them for sport? Less chance of the stupid prey fighting back, not knowing it is made for defeat, that way. And how do the meat species dare to organize and fight back? Would they have the proud hunting foxes starve? Why, the Fox-God says it is the duty of the weak to be meat for the strong; there's no shame in the weak going that way...

And back to squirrels. With the squirrel lifespan averaging off at sixteen, well, the old geezers will have the maturity of teens; which is to say, maybe my idea of Gangster Squirrels wasn't that far off. I'm not sure what a society would look like when there are no adults, no middle-aged people, no old people; but a sort of a primitive gang-based life seems right to me.

Meanwhile, the swans not only fly; they live, even without medical intervention, decades longer than humans do. The Gangster Squirrels would be bowing at their Swan Overlords pretty quickly; or then wiped off the face of the forest, driven screaming into hiding. And swans can fly, and paddle --- they have an air force and a navy from the start! For land use, I'm sure there are fox mercenaries that can be hired for the spoils and a loser buffet.

Consider carps, too --- there are no carps in Finland, but maybe there are other long-lived fish species? --- a lifespan of a century, and a pond that will not have swans or squirrels invading it in any hurry. And lakes don't have any easy avenues of escape, if you are an underwater creature: if a would-be fish empress closes the rivers, she can guarantee there will be no escape for her enemies.

As for dryland suits for the fish, well, really, how cartoonish can you get? First you'd need tools, which fish are not best built to use; then you'd need the materials and the ingenuity to conceive of and to create a fishbowl and some mechanical legs or treads. All without fire, without glass, without smelting and forges, mind you; fire is difficult underwater. Though there could be mines, deep into the silt; and baskets of woven weeds --- but for some reason I can't escape the thought that any fishbowl might be unpleasantly organic, fish being known for having all kinds of air bladders in them, that could be cut away and sewn into waterproof sacs.

The fish may come out of the lakes, but it will take a long march along the road of technology before they do.

Though they could always obtain some tools by capsizing a boat...

"That lake? Jake, we don't go boating on that lake. Too many good boats and men been lost on that lake..."

The problem here is that the horror movie scenario of a murderous, suddenly cunning animal species is familiar to all; but what does it look like when a scenario of less ferocity persists? When animal intelligence is a fact known for years, decades, for all of human history? (Well, it would be easier for the animal rights movement. If you can hear a cow saying "Please don't eat me!" in English, well...)

If animal intelligence is a new, local thing, there will be hordes of curious biologists, or gruff animal control officers. But if it is an old thing, there will be... enduring oppression of Animal-Americans? Actual voting asses and elephants? Because animals keeping up a charade of stupidity is an... an asinine thought; such a conspiracy would never last. And without it, people of all species need to deal with the situation.

I'd probably need to go with an isolated forest, with not many humans around; otherwise the story would quickly become a leaden metaphor for foreign people, integration, and racism, which is a very bad idea if you have species that really are fundamentally different. ("Why do the white swans rule? Because they are superior. It is nature's law that the superior species rules...")

But language. If squirrels die as teenagers, I don't see much great poetry coming from them. (A cheap shot, yes.) Also, I'm not sure how many languages there will be. Multiple ones for each species, if they are divided and isolated; the same language across multiple species, with varying dialects, if there is One Species To Rule Them All. A squirrel grunting a few syllables of Swan, bartering with a squinty-eyed pig for a few acorns...

All above has been assuming the idea of "talking animals" is much like the issue of "talking humans": biology and its consequences. All is different if you have only a handful of sentient, intelligent animals with a language: not Redwall but the Winnie-the-Pooh gang, a small group set apart from the bestial majority of animals. Then there's no great disruption in the order of the world, and no great consequences.

There I'd go for the why: Why, all of a sudden, do a bear, a cat and a sparrow think much like humans do, feel like humans do, use the language humans do? How can they all of a sudden do that? And who are they --- is this sudden awareness a possession, or an amplification? Language is not something to be poured into a person's head, and culture even less so: how come the cat can quote Shakespeare, not even having hands for leafing through the pages? Has she seen it on stage? How can the bear do mathematics, not having had any natural reason for developing those abilities? Has he picked it up, doing tricks? And how on Earth can the sparrow, small enough to fit in a human hand, have the brain capacity to behave socially like humans do, instead of madly pecking and cacking all over the place? Not by the processes of nature.

What then, but a quest after answers. Is it "magic", rebirth, the surgical work of a mad computer seeking agents, or something more bizarre? Who, how, why? ("You are," the Man in Black drawled, "my people. I made you people. How dare you disobey me, Mr. Bear? Now go and get me those Russian nuclear secrets!")

Turns out "talking animals" is an interesting idea, with the potential for very heavily screwed up tales in it, tales far beyond Winnie-the-Pooh or Redwall... and far beyond the point where the potential publisher says, "I think your future lies with the Xerox machine."
