
Guest Post: BP Oil Spill #4

From Satira, Trevor, Guang, and Jeremy

We all know that the media greatly affects how a topic or event is perceived by the general public, and it certainly did so with the BP oil spill. Our mission was to determine just how different news agencies portrayed the spill. Our specific question was:

Following the initial reports of the spill, what were the differences in the facts reported by Fox News vs. MSNBC?

In order to answer this question, we needed data from each source. But how do you get data from articles? We decided that there were three distinct tones a news agency could project:

Science
Emotion
Political

To classify the agencies into these categories, we needed comparable articles from each source. So we found timelines from Fox News, MSNBC, and the UK's The Guardian that included one- to two-paragraph summaries of each day during the oil spill crisis. (We added The Guardian in order to have a source from outside the USA to compare against the tones of the US agencies.)

Classifying the articles was going to take more than a gut feeling about their tone; we wanted hard data. Therefore, we came up with five key words for each category and counted how many times those key words appeared in each timeline. From those counts we could then determine whether the source was political, emotional, or "sciency." The key words are as follows:
Science:
Cap
Leak
Failure
Environment (science)
Solution

Emotion:
Environment (emotional)
Animals
Clean-up
Dead/missing/victim
Location/Gulf Coast

Political:
Obama
BP
Barrels
Money
Bi-partisan/parties

In the beginning, we predicted that Fox News would mostly discuss politics and would attack Obama. We expected MSNBC to praise Obama and to express sympathy for the region. We thought that The Guardian, being from another country, would leave out the politics and discuss the science.

Our results were as follows:



From the counts, we put together this graph. To avoid confusion, I'll explain its construction. Every time one of the key words appeared in, say, Fox's timeline, we made a tally. We found Obama's name 130 times, and we found all of the key words a total of 458 times in Fox's timeline. Because raw key-word counts would not be comparable across timelines of different lengths, we converted them to percentages: the 130 mentions of Obama divided by the 458 total key words yields 28%. The graph therefore shows, for each source, the percentage of its total key-word count that each word accounts for.
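For anyone curious how a tally like this might be automated, here is a minimal sketch of the counting-and-normalizing step. It is only an approximation of what we actually did: the keyword lists and file name below are illustrative stand-ins, and our real coding relied on human judgment that a script cannot reproduce (for instance, deciding whether "environment" was used in a scientific or an emotional sense).

```python
import re

# Illustrative keyword lists -- not the exact lists or files we used.
KEYWORDS = {
    "science":  ["cap", "leak", "failure", "environment", "solution"],
    "emotion":  ["animals", "clean-up", "dead", "missing", "victim", "gulf coast"],
    "politics": ["obama", "bp", "barrels", "money", "bipartisan"],
}

def keyword_percentages(text):
    """Tally each key word and return its share of all key-word hits."""
    text = text.lower()
    counts = {}
    for words in KEYWORDS.values():
        for word in words:
            pattern = r"\b" + re.escape(word) + r"\b"
            counts[word] = len(re.findall(pattern, text))
    total = sum(counts.values()) or 1  # guard against an empty timeline
    return {word: 100.0 * n / total for word, n in counts.items()}

# Hypothetical usage with a saved copy of one news timeline:
# with open("fox_timeline.txt") as f:
#     for word, pct in keyword_percentages(f.read()).items():
#         print(f"{word:12s} {pct:5.1f}%")
```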

·    Fox News focused little on the oil spill itself. Most of the discussion attacked President Obama; only one or two sentences per summary discussed the spill, and little science was discussed. As predicted, Fox News was biased against Obama.

·    MSNBC was less political and more economic. Most summaries were quantitative. More science was discussed, but not much more. MSNBC was not as emotional as we expected; it was fairly neutral and discussed the three categories in roughly equal amounts.

·    The Guardian was the most well-rounded. Most of its summaries contained only updates on the status of the well and the surrounding environment. Surprisingly, though, The Guardian was also very focused on politics.

We have identified the pitfalls of this method and have put together a list of what we would do differently if faced with a similar task:

·    Include more key words.
·    Have one person do all of the coding, since different readers interpret the same text differently.
·    Include more sources.

In the end, we concluded that all of the agencies we considered were politically motivated, with little science communication. This is to be expected; as Jen has mentioned many times in class, "It is very hard to take the politics out of science." The mainstream media is more concerned with its ratings and viewership than with exposing the community to science.

Fukushima Sci Comm I Did Not Expect and That May Be Both Inappropriate, Offensive, and/or Inaccurate but that is Also Funny, Maybe









A Science Journalist Critiques Physics

In this Chronicle of Higher Education article, science writer and journalist John Horgan tells physicists to "get real."  He charges that recent popular books on physics reveal a disconnect between physicists and what is really fascinating other readers and thinkers these days.  Neuroscience, he suggests, is doing a better job of capturing our imaginations (so to speak).  I'm interested in what those of you in physics think....

On Fukushima

Please forgive the radio silence the last few days:  I've been focusing closely on what's happening in Japan, trying to update regularly on Facebook, and also to keep track of what my colleagues and associates are doing related to the earthquake/tsunami/nuclear crisis.

If you are on Facebook, please add the group "Dr. King's Fukushima Power Plant Info Group" to your groups.  Dr. Jeff King is the head of Nuclear Science and Engineering here at CSM, and though he and I sometimes have different perspectives on some of what's happening in Japan, he is an excellent source of technical information on this topic.  He has been widely interviewed by local news sources as well.

For now, let me say that my heart goes out to everyone who is suffering, or who has family or friends suffering, in Japan.  It is heartbreaking to see so many events like these--and those that have occurred in Haiti, Christchurch, Brisbane, Pakistan--happen one after another.  It is a reminder of our common humanity, vulnerability, and strength.

As for what is happening with the nuclear crisis, please keep in mind that we have two blogs in our class devoted to nuclear power:  The Nuke Truth and The Nuclear Option.  Both have posts related to the Fukushima plants.  Columbia Journalism Review also has this piece, related to media coverage of the events.  My own sense of media coverage and science communication tracks closely with Brainard's.

I will say that there is much we don't know yet about what is happening in Japan, both in terms of the tsunami/earthquake recovery and in terms of the long-term repercussions of the nuclear crisis.  There is a wide array of conflicting reports about the state of the reactors, with several respected nuclear sources taking a conservative view (that damage is fairly well controlled, that radiation releases will cause limited harm, etc.) and those wary of nuclear seeing this as indicative of continued over-confidence in the safety of nuclear as an energy source.  These are both ideological stances that can be supported with data, and yet the data we have for now is unclear and contested.  Data provided by the World Nuclear Society and the Japan Atomic Industrial Forum, along with the American Nuclear Society, suggest things are moving in a positive direction, and remind us that this is quite a different event from Chernobyl, for a number of reasons.  The reactions of others, including the US government, some Japanese, and reporters on the ground, suggest that things are still in a complex state of uncertainty.

At this point, we will wait and see what the literal and figurative fallout will be.  I'm not sure there will be a huge pushback against nuclear, the way there was after Three Mile Island, because of the threat of global warming and rising energy needs.  But those who had been tentatively willing to give nuclear a chance because of global warming fears may be less likely to do so now.  Much depends on these next few days, and what happens at the Fukushima plants.  My hope is that there is good news.  Most likely, we will not know what really is happening for some time to come.

Guest Post: BP Oil Spill #3


From Carlos, Mike, and Zach

When studying the oil spill disaster that occurred in the Gulf of Mexico last summer, we wanted to examine whether upstream engagement had been used. This led to the question that guided our study of the BP oil spill:

                What forms of upstream engagement did BP use, if any?

This is an important question: it evaluates how well BP was able to engage public contributions to solving the problem, illustrates the importance of upstream engagement, and tests whether upstream engagement is actually effective.

When we began researching, we were very skeptical that BP had used any upstream engagement; we thought that BP would want to use only BP-employed scientists and engineers to solve the problem.  We constrained our research to BP press releases, direct quotes from BP representatives, and the BP suggestion website.

Literature Review

So what is upstream engagement? It is defined as public engagement that occurs at an early stage in a process, so that input can shape policy changes and major decisions before they are set in stone.  Upstream engagement is needed mainly because the public no longer has blind faith in scientists.  An ever-skeptical public should have a say in developing technologies, and upstream engagement seems like the way to give it one.

Some scientists argue for upstream engagement on the grounds that it is the ethical thing to do, while others are skeptical, pointing out that upstream engagement still assumes a linear transmission model and is therefore oversimplified.

The 1989 Exxon Valdez spill in Alaska is a good comparison to the 2010 BP oil spill.  Exxon hired no outside public relations consultants, dismissed the involvement of environmental activists, and refused assistance from local residents in the cleanup effort.  Top Exxon executives declined to comment until a week after the spill and shifted the blame to Alaska and the federal government.  Many criticized Exxon for its inability to provide substantial information quickly, and no form of upstream engagement was used in the cleanup process.

It is important to note that the Oil Pollution Act of 1990 (passed promptly after the Exxon Valdez spill) does not require any form of upstream engagement with the public; any engagement by BP was therefore voluntary.

Findings

Press releases showed that BP took advice and suggestions from outside industry experts regarding cleaning up the spill and shutting the well down.  Its plans were also technically reviewed by scientists and engineers outside of BP.  BP vice president Kent Wells said that BP had formed a "dream team" of top scientists and engineers in the industry, and other spokespersons stated that BP was taking ideas from many places, including public suggestions, and that this was not just a PR stunt.

The most significant upstream engagement came in the form of a toll-free tip line and a suggestion website.  The tip line received over 72,000 calls, and the website received over 20,000 ideas within a month of the oil spill.  To sort suggestions, BP reviewed each idea and assigned it to one of three categories: not possible/feasible, already considered, and feasible.  Anyone who submitted a suggestion received a reply about their idea.  Of the 20,000 ideas posted online, BP stated that 100 were feasible and under further review.

Discussion

BP used upstream engagement, but it was impossible to tell whether it actually considered public suggestions or whether the effort was just a publicity stunt.  BP did appear to take suggestions from other industry experts before executing its several different plans.  In the past, upstream engagement has typically been used as a tool to evaluate new technologies (e.g., nanotechnology), where urgency is not an issue. In the case of the BP oil spill, action had to be taken immediately to stop the spill.

Therefore, considering BP's situation, it is hard to know exactly how effectively upstream engagement can be used.  When a new design is required immediately to stop a spilling oil well, upstream engagement of the public may not be feasible or productive.  Rather, experts in related fields should be contacted for ideas on how to fix the problem, and this is what BP appeared to do.  BP's attempts at engaging the public may be an indication that corporate America is starting to understand the importance and advantages of upstream engagement.


Guest Blog Post: BP Oil Spill, Group 2

A lot can be said about a company from the press releases it issues to the public, especially after a mistake made within the industry. So when the BP oil spill happened, we expected the other oil companies to change their press release frequency and content. After all, some websites claimed that important events should have an impact on how often press releases are published. Although we did not find much evidence online to support our prediction, we expected to see a significant number of direct references to the oil spill, BP, and the Gulf of Mexico in the months following the spill.


How did the frequency and content of press releases from other oil companies change following the oil spill?

In order to answer the above research question, we looked at seven other oil companies: Chevron, ExxonMobil, Anadarko, Shell, Transocean, Conoco Phillips, and Marathon. We compared the frequency and content of press releases after the oil spill to those prior to the spill. Our focus was on how the companies portrayed themselves to the public. To do so, we paid particular attention to the tone of each press release.

For each company, we graphed the cumulative number of press releases versus time. For the most part, these plots were linear, which indicates no change in press release frequency over time. The companies that did deviate from their previous frequency, such as Transocean, played a bigger part in the events following the incident. The changes in frequency for other companies, such as Marathon and Shell, were unrelated to the BP event. BP, on the other hand, increased how often it published press releases by a factor of five immediately after the oil spill: before the disaster, BP issued roughly one press release every five days; afterward, it issued one every day.
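As a rough illustration (not our actual analysis code), a before-and-after frequency comparison like the one above could be computed from a scraped list of press release dates along these lines; the dates, window bounds, and variable names below are hypothetical.

```python
from datetime import date

SPILL_DATE = date(2010, 4, 20)  # day of the Deepwater Horizon explosion

def releases_per_day(release_dates, start, end):
    """Average number of press releases per day over the window [start, end]."""
    n = sum(1 for d in release_dates if start <= d <= end)
    return n / ((end - start).days + 1)

# Hypothetical usage, with `dates` scraped from a company's press-release archive:
# before = releases_per_day(dates, date(2010, 1, 1), SPILL_DATE)
# after = releases_per_day(dates, SPILL_DATE, date(2010, 7, 31))
# print(f"Frequency changed by a factor of {after / before:.1f}")
```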

The change in frequency of press releases from the other companies, or lack thereof, was rather anticlimactic, but what about the content? How did that differ? The answer is that each company reacted differently, though there was a general trend among most of them toward increased talk about the safety of their facilities.

For Chevron, five of their press releases in the months following the BP incident were focused on safety, stressing how important it is that their facilities operate safely and reliably. This contrasts with their previous habit of focusing on project details, quarterly reports, and topics investors would be more interested in than safety. A couple of press releases did directly mention the oil spill, indicating that Chevron was not ready to ignore the incident entirely, as some other companies did.

In the case of ExxonMobil, press releases following the incident focused mainly on everyday reports and business. A few focused on the spill, safety, or the Gulf of Mexico, but most were unrelated. ExxonMobil also issued a statement offering BP any assistance it might need in cleanup efforts. A good portion were actually about the company's monetary donations to educational institutions.

The company that perhaps had the biggest change in tone following the spill was Anadarko. They quickly shifted the blame to BP, claiming the event was “the direct result of BP's reckless decisions and actions."

Shell was another company that stressed the importance of safety in a few of its releases. Some of its other releases contained information pertaining to operations in the Gulf.

Following the initial reports of the fire on the Deepwater Horizon rig, Transocean released four informative articles detailing the facts about the situation. A little later, they issued a statement of clarification to the public. Two other releases provided insurance-related information.

Conoco Phillips had very few press releases pertaining to the BP oil spill. In fact, only one of its releases was related, a joint release shared with three other companies detailing plans to implement a system to prevent or contain similar future events in the Gulf.

Of all the companies, Marathon Oil seemed the most oblivious to the situation with the spill. There were a few news releases about operations in the Gulf, but none of them mentioned the BP incident. There was a slight increase in the frequency of releases following the spill, but they were all unrelated to the event.

From our analysis, we can conclude that the BP oil spill did not have much of an impact on the larger oil companies' frequency and content of press releases. The biggest change was that many companies began stressing the importance of safety. However, we noticed that there was no clear trend in how the companies responded. Some acknowledged the disaster directly while others feigned ignorance. This demonstrates that oil companies use a variety of methods to deal with a disaster in the industry, and how they respond is usually an indication of how they wish to be perceived by the public and especially by investors. Our initial theory about how the press releases would change was wrong; after our research, we found that each company reacted differently to the events, and much of that was driven by more than just the BP incident.


Communicating Science: Midterm Evals

Thanks to all of you who completed midterm evaluation forms in class last night.  You provided a ton of valuable feedback.  More than any class I've taught, the feedback in this course was incredibly diverse and varied.  I did some work grouping like responses together to figure out which comments to act on.  I'm happy to provide you with those groupings (they're lengthy!) but for this post I'll just reflect on the areas that came up the most in your evaluations.  Please leave more feedback in the comments if you like.

Also, you'll see that the tone here is a bit formal--I'm going to just plop this in the "Teaching Reflection" section of my yearly review next year.  It also doesn't have an introduction yet--I'll write that at the end of the semester.  Thanks!




Readings

Students were fairly uniform in their dislike of the Holliman text.  Sensing this, I asked them a few weeks ago if they would like to disengage from the second Holliman text and switch to scanned readings.  Students agreed by majority and returned the second Holliman texts for refunds.  They seem to be enjoying the scanned readings and discussion has improved, though one student notes that they seem “biased” (without providing more information about that) and another notes that they are hard to relate to because they are about science and not engineering.

I won’t order the Holliman texts again for next year but am instead working on writing and assembling a sci comm reader with scholar Susanna Priest.  Students seem to be enjoying “classic” sci comm texts, and these can be supplemented with more current readings.  More readings about technology and engineering communication can also be included—this is a good suggestion.

I think students will enjoy the upcoming readings we have after the break.  They will let me know!

Discussion

About 1/3 of the students noted that they enjoy class discussions of readings, but a handful noted that the discussions sometimes seem too theoretical or wandering (there was particular frustration after the class in which we tried to define “science”).  I am frustrated with the lack of participation from nearly half the students in the class.  I think this lack of participation is partly attributable to the size of the classroom—too many students in too small a space—but may also have to do with our slow start with texts or problems in the way I lead discussion.  I’ll have to think more about that.

Students also seem to enjoy discussion of good blogs or good blogging strategies, if kept brief, because these apply directly to their own blog work.  I also hope that some of the boredom/frustration with the length of the class (which will be split in subsequent semesters) will be alleviated when we begin the livelier practicum section of the semester after spring break.

The Blog Project

Apart from a few dissenters, the students seem by and large supportive of the blog project, both learning from it and enjoying it.  One student noted that it’s a good idea but needs to be better “executed,” without providing specifics.  I can fill in the blanks from other student comments:  some students want to be assigned more “hard” deadlines for the projects; others are tired of their topics and wish they had understood the grading structure earlier on because they feel trapped in blogs they are bored with or that aren’t working.

The second time the course is taught, the grading rubric will already be in place and, having taught the course once already, I’ll be more ready to provide guidance and advice early on to students who are not comfortable with the idea of blogging.  For example, spending more time at the beginning of the course examining good blogs (like Revkin’s Dot Earth), exposing students to the medium, and giving them time to experiment with blog topics should alleviate some of these anxieties.

Grading

Students were visibly upset by the grades they received on the first reading quizzes (though they were only worth 25 points).  I was surprised at this, given that they received the questions ahead of time in class and could work together to discuss answers.  But performance was poor on that first quiz.  Grades on the second quiz improved dramatically, but students still seemed anxious about them and requested take-home quizzes or essay questions.  I see no problem with adopting these strategies for the rest of the semester and in future iterations of the class.

Some students like the flexibility of the blog project—they like not having posts assigned or minimum requirements each week.  Others are ill at ease with this approach, preferring more structure and guidelines.  Some feel locked into their projects and, because of poor early grades, are afraid of failure for the rest of the semester.  My hope is that their hard work will pay off in the second (of three) grading periods for the blog and they will see improvement in their grades.

I prefer to keep things open because this mimics project work in the professional world and because it allows exploration and freedom.  But it is much harder to pull off successfully in such a large class where I can’t do much one-on-one instruction.  Keeping up with 35 blogs is incredibly time-consuming.

Class Size and Location

A substantial amount of pressure has been placed on LAIS professors to accept large numbers of students into 400-level classrooms (beyond the 25 cap), and classroom space is also at a premium on campus.  We are somewhat limited because we need to be in a computerized classroom.  As a result, we are in a fairly cramped room, and the size of the class makes it difficult to have good discussion and to provide enough tailored feedback to students.  I have made some adjustments, and realize economic times are difficult, but I hope administrators at the division and university levels see that increasing class sizes does jeopardize the quality of liberal arts education.  In the meantime, I will do some research and see if we can move this class, this semester, to a better space.

Workload

Mines students have an excellent reputation as hard workers who are very focused on their studies.  I have the utmost respect for the pressures they face given their many obligations.  And I hear the comments that the class feels like a lot of work.  However, there is about the same or less reading in my class as in other LAIS seminars (we have studied this in the Division), and the blog writing will end up constituting the same amount of writing as, or less than, the formal writing assignments for other courses (we have also studied this).  So, I know you’re working hard, but you are meeting the standards the Division and the university have set for 400-level work.

I’ll take into consideration the many other good comments I received.  Thank you for your feedback!

ENIAC Beyond Throws Down

If you haven't seen it already, you should check out Carl's hilarious challenge to quit asking "What do you think?" at the end of our blog posts (moi, guilty as charged).

But it does leave open the problem:  how to end a blog post?  We are trying to engage our readers, after all, and sometimes it feels abrupt to just stop writing.

So how to end

Guest Post: The BP Oil Spill, Part 1

Here is the first student guest post, presented by our research team from last week.  Enjoy!



We don't want to start out saying that the BP oil spill is all President Obama's fault, but this does summarize the findings of our study.  It would be absolutely absurd to state that the BP oil spill was a result of anything that President Obama did, but it seems that the communication channels somehow led to this kind of response.

So what did we set out to do?  After much difficulty and modification, we decided to examine how well the communication chain, beginning with BP's press release about testing of the well, worked in getting the message across to the general public.  As it turns out, this is a tricky question to answer.  So how did we decide to go about it?  Well, we had to put numbers to exactly how the general public received the information.  The information in the press release was communicated to the public through the mass media, specifically in our case through a CNN article on which people were able to comment.  These comments allowed us to measure the reaction to the information.

Just to give you an idea of what we were looking at, a brief summary of the articles is in order.  The BP press release announced the test being run on the well.  It said, very briefly, that the well had been capped in order to test its integrity, and that the testing would last no longer than 48 hours.  That was the basic gist of the press release, but then CNN got hold of it.

CNN quickly stated the basic facts, but it was easy to tell that the premise of the BP press release was not where CNN was trying to go with the article.  In essence, the article became a human interest story.

"While I am pleased that oil no longer is flowing freely into the Gulf of Mexico, there is more work to do to help families, businesses and communities on the Gulf Coast as they recover from this disaster."
As seen here, the article brings in the human interest side of the BP oil spill.  It also begins to give false hope that the spill was ending, even though BP had made clear that this was just a test and not a permanent capping of the well.  This framing is simply inaccurate, but it seems to make the issue more pressworthy.

This made it interesting to look at how people received the information.  We found that many people did not actually receive the information BP put out about the test.  The comments on the article turned into a debate about the handling of the whole oil spill, even though that was not the subject of the press release.

To put numbers to the comments, we broke them up into several categories: positive, negative, neutral, or not applicable.  We chose these categories to test whether people were receiving BP's message in a positive way or not.  Reading the first 200 comments on the CNN article, we found the following (a small sketch of this tallying appears after the results):
Positive:  5%
Negative:  41.5%
Neutral:  10%
Not Applicable:  43.5%
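Here is the small tallying sketch mentioned above.  It assumes the positive/negative/neutral/not-applicable labels have already been assigned by hand, since judging a comment's tone was exactly the part we could not automate; the variable names are illustrative.

```python
from collections import Counter

CATEGORIES = ("positive", "negative", "neutral", "not applicable")

def tally(labels):
    """Turn a list of hand-assigned comment labels into percentages."""
    counts = Counter(labels)
    total = len(labels) or 1  # guard against an empty list
    return {cat: 100.0 * counts[cat] / total for cat in CATEGORIES}

# Hypothetical usage for the first 200 hand-coded comments:
# print(tally(hand_coded_labels))
# e.g. {'positive': 5.0, 'negative': 41.5, 'neutral': 10.0, 'not applicable': 43.5}
```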

From the results, it is easy to see that people generally have a negative outlook on BP's handling of the oil spill, but we are not sure that the way we set up this experiment really gave us the meaningful information we wanted.  It turned out to be difficult to classify a comment as simply positive or negative.  Many of the comments were about issues other than what the article was even talking about.  We also did not account for the outside bias that had already built up toward the situation.  It would have been more meaningful to break the categories up further into sections describing exactly what people were being positive or negative about.  It wasn't just the testing that they were frustrated about; in fact, conversation about the testing of the well accounted for a very small percentage of the comments.

One thing is for sure, though: the communication chain does not directly convey what the original source intended, nor do we think it should be held to that standard.  The concerning part is that the intended message put out by BP did not make it to the public.  That was the most conclusive part of this study.  It is hard to make any other kind of statement about the communication of the press release because the communication chain was so overrun by outside bias brought to the article.  What is newsworthy and the intended message do not always coincide, and this was definitely the case with CNN's transmission of the information.

This tells us a lot about how press releases should be thought about.  We can no longer depend on the media to relay the original message; instead, we must be willing to take getting our message across into our own hands.  We have to be advocates in communicating to the public what needs to be communicated!

Posted by – Aaron Ackerman, Carl Blum, JD, Dan


Leitzell on Sharing Data with the Public

I had the pleasure of working with some CU grad students last fall in a class on Science, Technology, and Society (commonly known as STS).  One of those students, Katherine Leitzell, has written a piece for Science Progress (a publication of the progressive think tank Center for American Progress) that directly relates to some of the discussions we've been having in class about how and when to provide the public access to scientific data.

In the piece "Take the Data to the People," Leitzell profiles a website created by the National Snow and Ice Data Center called the Arctic Sea Ice News & Analysis website.  Leitzell writes,

"The site, now partially funded by a NASA grant, includes daily updates of sea ice data, along with monthly to weekly posts written by scientists in collaboration with science writers.  The posts provide context for the data....   We also address questions brought up by readers.  Making data available to the public is a popular idea, but simply providing access to data is not enough.  Most NSIDC data were publicly available before we started the Arctic Sea Ice News & Analysis website--they were just difficult for a nonscientists to find and interpret.  Scientific terms such as bias, statistical significance, and error can be easily misinterpreted and need explanation."



This is fascinating, isn't it?  The site is providing an opportunity for scientists to work with professional communicators (which must be a learning opportunity for both groups) to present their work to the public in a way that will make sense to laypeople AND other scientists, and which also provides raw data to citizens.

This sort of communicative endeavor doesn't happen without risks, of course.  Leitzell also writes that the site "receives a surprising amount of criticism," mostly from those who think climate change is bunk. But teachers and others value the site.

This makes me think about our own blogging.  How often do we just provide data or present science or technological concepts without doing some sort of interpretation of that data?  This goes back to our old discussions about the deficit model, I suppose.  But it does suggest that we need to do more than just present the data, or say "isn't this cool"--we have to do the hard work of explaining what it could mean and how it might be interpreted.  And, as is the case with the NSIDC website, that might invite some harsh criticism.  But to do otherwise is probably not communicating at all.