Thinking as a Hobby


CogSci 2007: Day One

Annual Meeting of the Cognitive Science Society 2007
Nashville, TN

I attended the Psychonomic Society conference last year, and I'd heard that the acceptance rate was lower and the standards higher at CogSci. Initially I wasn't very impressed with the overall quality of the presentations, but things seemed to get better as the conference went on. I didn't like the scheduling, though. Each day there was a plenary session starting at 9am, with paper sessions every hour and a half. There was an hour-and-a-half break for lunch, but in the evening there was typically another plenary session, immediately followed by a poster session, running all the way up until 9pm. So they basically didn't give us a dinner break, which was dumb. It wouldn't have been dumb if they'd served finger sandwiches or something to hold you through the poster sessions, but they didn't.

Anyway, here's a day-by-day breakdown of the conference based on the sessions I attended:

Day 1: Thursday, August 2nd

Session 8-02-2A: Emotion 10:30am-12:00pm

I was still pretty tired from the 10-hour trip the day before, so I wasn't entirely alert. But one thing that struck me was probably the reason so many researchers avoid dealing with emotion in cognitive research: one, it's hard; two, the concepts are very poorly defined. There was still some interesting stuff. The talk on "Facial Features for Affective State Detection in Learning Environments" by McDaniel et al. dealt with attempts to monitor students' emotional state while they were interacting with an automated tutoring program. They focused on four states: flow (engagement), frustration, confusion, and boredom. They were drawing on data suggesting that the most progress was made by students who cycled between flow and confusion. They also noted that certain states had more inertia, in that students were more likely to remain in a bored state than in a flow state. They attempted to monitor emotional states every 20 seconds, and the goal was to automate that process so that the tutor could react directly to the students' state in order to optimize the tutoring experience. They talked about the difficulty of assessing affect on the basis of facial features such as mouth and eye movement. Ultimately they ended up using an eye-tracking system that by itself was not terribly reliable (compared with human judgments of emotion), but when they combined it with analyses of posture, speech, and features of chat dialogue with the tutor, the system achieved assessments comparable to those of human judges. Here's the website to AutoTutor if anyone's interested.
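Just to make the fusion idea concrete, here's a sketch of combining several individually unreliable channels by weighted vote. The channel names, weights, and example values are my own illustrative assumptions, not the method actually used in the paper:

```python
# Hypothetical sketch of multi-channel affect fusion, in the spirit of the
# AutoTutor work described above. Channels, weights, and labels are
# illustrative assumptions, not the authors' actual method.

STATES = ["flow", "frustration", "confusion", "boredom"]

def fuse(channel_votes, weights):
    """Combine per-channel affect judgments by weighted vote.

    channel_votes: dict mapping channel name -> predicted state
    weights: dict mapping channel name -> reliability weight
    """
    scores = {s: 0.0 for s in STATES}
    for channel, state in channel_votes.items():
        scores[state] += weights.get(channel, 1.0)
    # Return the state with the highest weighted support.
    return max(scores, key=scores.get)

votes = {"eyes": "confusion", "posture": "boredom",
         "speech": "confusion", "dialogue": "flow"}
weights = {"eyes": 0.5, "posture": 0.8, "speech": 0.7, "dialogue": 0.9}
print(fuse(votes, weights))  # confusion
```

The point being that no single channel has to be very accurate, as long as their errors don't all line up.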

The other talk I found interesting was an attempt to model emotion and incorporate it as a module in the SOAR architecture. The talk was "Computational Modeling of Mood and Feeling from Emotion" by Robert Marinier, under the supervision of John Laird. I applaud their attempt to model emotion, but I'm pretty skeptical about their representation. What they were calling "emotion" was basically an appraisal of the agent's goal state along a number of dimensions, such as "power" (e.g. the agent may feel powerless in a fear state, but still feel in control in an anger state). Emotional states were represented as vectors. They distinguished between mood, feeling, and emotion: I believe emotion was the appraisal at any given point in time, mood was a running average of emotional states, and feeling was the subjective interpretation of mood combined with emotion. I think there's a lot more to emotion than this, but again, I give them kudos for even approaching the problem.
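As best I can reconstruct the scheme, it might look something like the sketch below. The appraisal dimensions, the averaging rule, and all the numbers are my guesses for illustration, not the authors' actual model:

```python
# Illustrative sketch of the emotion/mood/feeling distinction as I understood
# it from the talk. The dimensions and averaging scheme are my assumptions;
# the actual SOAR module may differ substantially.

DIMS = ["valence", "power", "control"]  # hypothetical appraisal dimensions

class AffectState:
    def __init__(self, decay=0.9):
        self.decay = decay                # how slowly mood changes
        self.mood = [0.0] * len(DIMS)     # running average of emotions

    def update(self, emotion):
        """emotion: appraisal vector for the current moment."""
        # Mood drifts toward the current emotion (exponential moving average).
        self.mood = [self.decay * m + (1 - self.decay) * e
                     for m, e in zip(self.mood, emotion)]
        # Feeling: the momentary experience, mood combined with emotion.
        return [(m + e) / 2 for m, e in zip(self.mood, emotion)]

agent = AffectState()
fear = [-0.8, -0.9, -0.5]   # negative valence, powerless, little control
anger = [-0.7, 0.4, 0.6]    # negative valence, but still feeling in control
agent.update(fear)
print(agent.update(anger))
```

Note how the same negative valence yields different "feelings" depending on the power and control dimensions, which is roughly the fear-versus-anger contrast from the talk.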

The last talk, "Anger in a Just World? The Impact of Cultural Concepts on Cognition and Emotion" by Bender et al., dealt with differences between Tongans and Germans in relation to anger and their views of how "just" the world is. Tongans tend to be less angry than members of other cultures, and the researchers were interested in how this correlated with their worldview. To what extent did members of each culture think the world was just, with good things happening to good people and bad things to bad people? When something bad happened, were they more likely to blame the person it happened to (intrinsic causes) or other people and circumstances (extrinsic causes)? I believe the results pointed to a stronger belief in a just world among Tongans than among Germans, as an explanation for their reduced expression of anger.

Session 8-02-3E: Situated and Embodied Cognition; Robotics 1:30pm-3:00pm

Unfortunately, this was probably the worst session I attended at CogSci. The first speaker was virtually incoherent, both linguistically and conceptually. He was attempting to present "An Embodied Model for Higher-Level Cognition", but the slides were incomprehensible and his words did nothing to clarify them. Also, I have to admit being distracted by the presenter's use of a tree branch as a pointer. Yes, a tree branch.

The best presentation in this session was by Pat Langley, who presented recent work with the Icarus cognitive architecture. Icarus apparently has a lot of overlap with SOAR and relies primarily on means-ends analysis to solve problems, but it is designed specifically for embodied agents.

Session 8-02-4B: Meaning Representation 3:30pm-5:00pm

I was really dragging by this session, so I actually left early, though I did catch the first two talks. The first centered on the REM-II model, a Bayesian model of episodic and semantic memory systems. For the "semantic" representations, they used information from the Mindpixel project, which is now defunct because the lead developer killed himself last year. Basically it's similar to the Open Mind Common Sense project at MIT and to Cyc, all of which attempt to encode semantics by having humans input "common sense" statements such as "Grass is green" and "Milk is something you drink". The hope of such an approach is that some critical mass will be reached that allows the program to reason and communicate intelligently about the world. I think such approaches are doomed to failure, since they're basically deriving semantics from relations among words rather than from real-world referents. I ran into this approach several times throughout the conference, as the use of LSA (Latent Semantic Analysis) seemed to be widespread. LSA seems like an interesting and useful technique for clustering related words and finding statistical regularities in text, but I have a hard time with people relying on it as a basis for semantics. A word is a representation/referent relationship. Words aren't ultimately defined by other words, but by the things they represent. Using the term "semantics" in this way seems circular to me.
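For what it's worth, the core mechanics of LSA are easy to demonstrate. Here's a toy sketch (the corpus, counts, and dimensionality are all made up for illustration) showing how it derives word similarity purely from co-occurrence statistics:

```python
# Toy LSA: factor a term-document count matrix with SVD, keep the top k
# singular values, and compare words by cosine similarity in the reduced
# space. The corpus and dimensions are invented for illustration.
import numpy as np

terms = ["grass", "green", "milk", "drink"]
# Rows = terms, columns = tiny "documents" (word counts).
X = np.array([[2, 1, 0],    # grass
              [1, 2, 0],    # green
              [0, 0, 2],    # milk
              [0, 1, 2]])   # drink

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]   # term coordinates in the latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "grass" and "green" co-occur, so LSA places them close together -- but the
# similarity is derived entirely from word-word statistics, never from
# real-world referents, which is exactly the objection above.
print(cosine(word_vecs[0], word_vecs[1]))
```

The technique really does recover useful clusters, but nothing in it ever touches a referent; it's words all the way down.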

Anyway, I started to nod off during the second talk, about the dimensionality of language, so I stepped out into the lobby to rest. I wanted to be reasonably alert for John Anderson's plenary talk.

Heineken Plenary Talk: 5:30pm-6:30pm
John Anderson
"The Image of Complexity"

I thought the plenary talks got better as the conference progressed. Basically this was a talk in which Anderson compared fMRI data from human subjects to the timing of activation of particular modules in his famous ACT-R cognitive architecture on several mental arithmetic tasks. Anderson described ACT-R in terms of 6 interacting modules: visual input, production, imaginal, retrieval, goal, and manual (output). He claimed these modules corresponded well with the following brain areas (in the same order):

fusiform gyrus <-> visual input module
caudate nucleus <-> production module
posterior parietal cortex <-> imaginal module
prefrontal cortex <-> retrieval module
anterior cingulate <-> goal module
motor cortex <-> manual module

At least I think that's pretty close. He went through it pretty fast. I'm not that familiar with the ACT-R model, but it sounded like the first and last were for input/output, while the production module was something like a central executive. The imaginal module was for mentally representing and manipulating information (I think for tasks like mental rotation of objects). The retrieval module sounded like it was for accessing long-term memories related to the problem at hand, while the goal module was for guiding and determining progress on a given problem.

One task was to decompose a number into the sum of one-digit numbers. For example, 25 is 6 + 6 + 6 + 7. The larger the number, the more difficult the problem. There are many different algorithms and many different correct answers. One example was 67. It seemed to me a very simple algorithm would be to decompose it initially into 60 + 7, then iteratively decompose the larger number into 9's and 1's, e.g.:

67 = 60 + 7
60 = 51 + 9
51 = 50 + 1
50 = 41 + 9
41 = 40 + 1
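
Continuing that iteration until every part is a single digit, the whole thing fits in a few lines (my sketch of the strategy, obviously not anything from the talk):

```python
def nines_and_ones(n):
    """Greedy decomposition sketched above: peel off the last digit (or a 9
    when the number ends in 0) until only a one-digit number remains."""
    parts = []
    while n > 9:
        piece = n % 10 if n % 10 else 9
        parts.append(piece)
        n -= piece
    parts.append(n)
    return parts

print(nines_and_ones(67))  # [7, 9, 1, 9, 1, 9, 1, 9, 1, 9, 1, 9, 1]
```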

But I believe they forced the subjects to use an algorithm which incrementally halved and rounded each number, e.g.:

67 = 33 + 34
33 = 16 + 17
16 = 8 + 8
17 = 8 + 9
34 = 17 + 17
17 = 8 + 9
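
The halve-and-round strategy is easy to sketch as a recursion (again, my reconstruction of the strategy as described, not the experimenters' actual procedure):

```python
def halve_decompose(n):
    """Recursively split n into floor(n/2) and ceil(n/2), as in the strategy
    described above, until every part is a one-digit number."""
    if n <= 9:
        return [n]
    lo = n // 2
    return halve_decompose(lo) + halve_decompose(n - lo)

print(halve_decompose(67))  # [8, 8, 8, 9, 8, 9, 8, 9]
```

The recursion makes the branching obvious: where the greedy strategy tracks one running number, this one has to hold an entire tree of subproblems.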

This seems more complicated to me, as the problem branches. Subjects' BOLD (blood-oxygen-level-dependent) signals were measured for various brain areas, and the timing of activation in each area was compared to the sequence and duration of activation of the modules in ACT-R as it solved the same problem in the same way. Some sort of manipulation was done to the data, which Anderson referred to as "warping", I think in order to normalize the data for direct comparison. It was technical and he went through it quickly, so it was hard to tell whether it was some sort of mathematical sleight-of-hand. In the Q&A, someone (it might have been Paul Smolensky) specifically asked about the warping method. Anderson seemed to get a little defensive about it, but insisted that the technique was applied to both the human and model data, and that they were "comparing apples to apples".

Another task was mental multiplication, and subjects were trained to use the "expert" strategy, which involved multiplying from right to left rather than the usual left to right, carrying numbers as you go. Again, this seemed like a more complicated strategy, which is fine if that's the manipulation they wanted to make. As with the other experiments, the fMRI data very closely matched that of the ACT-R model, with a couple of small exceptions. One was that humans tended to show activation in anticipation of motor output (I believe these experiments involved pressing keys) quite a bit earlier than the model. One possibility suggested by a questioner was that the humans were using their fingers to perform arithmetic operations. I was thinking it might be due to a reaction where they think they have the right answer, the motor cortex gets ready to act, and the action is then inhibited when they realize they don't actually have it. Subjectively, there are many times when I "almost" write down an answer or part of an answer while working a problem, only to stop myself.
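As a sketch of that right-to-left carrying procedure (my reconstruction for the simplest case, a multi-digit number times a single digit; I don't recall the actual operand sizes used in the task):

```python
def multiply_rtl(n, d):
    """Multiply a multi-digit number n by a single digit d, working
    right-to-left and carrying, as in the trained strategy described above."""
    digits = [int(c) for c in str(n)][::-1]   # rightmost digit first
    carry, out = 0, []
    for dig in digits:
        prod = dig * d + carry
        out.append(prod % 10)                 # keep this digit of the answer
        carry = prod // 10                    # carry the rest leftward
    if carry:
        out.append(carry)
    return int("".join(str(x) for x in reversed(out)))

print(multiply_rtl(67, 8))  # 536
```

Even written out, you can see the working-memory load: at every step you're holding a partial answer plus a carry.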

Anyway, it was an interesting talk, but my radar was up a bit, due both to not really understanding the warping technique and to the uncanny overlap between the human and model data. It was all a little too tidy. And while I buy that the brain is highly modular, I'm skeptical that it's as neatly partitioned as Anderson's model.

We were hungry and very tired by this point, and all the evening plenary sessions ran late, so it was about 7pm. The first poster session ran from 7pm to 9pm, but we skipped it in favor of dinner and sleep.

Next: Day Two
