Jason Becker
June 4, 2013

Kevin Carey has a great new piece on his experience taking a Massive Open Online Course (MOOC). If you have any interest in blended learning, online education, and higher education policy, I would consider this a must read.

He carefully addresses many of the common concerns about online-only course work. Can it be truly rigorous? Yes. Do you lose the “magic” of taking a big lecture course in person? Maybe, but there’s more magic in pressing pause and rewind on a video. Can you assess 40,000 students in a rigorous way? Yes.

Carey concludes that the cost of attending an institution of higher education, and of paying so many PhD instructors for lecture courses, is astronomically high considering the small value-add that arguably exists between the best online version of a course and what most professors and lecturers offer.

The implication for Carey is clear: online education done extremely well can be as effective as some university courses today, and most university courses tomorrow.

I agree that lectures in person are not better than online lectures. I also agree that intellectually stimulating conversation about content/material can happen online in forums, over video chats, using popular social networks, etc. I even agree that it is possible to do rigorous assessment in many domains 1. I am quite confident that a well-implemented MOOC could replace the typical college lecture course today on many campuses. The problem with MOOCs is not their ability to replicate the quality material aspects of a college course.

What is the Problem with MOOCs?

Carey spent 15 hours a week watching lectures, working through problem sets, and communicating with fellow students to complete the course work with satisfactory outcomes. Think about the amount of perseverance it takes to work that hard independently from a computer in your home. There is an entire world of books and webpages dedicated to helping upper-class, knowledge-economy employees work productively from their home offices, because while some thrive, many struggle to be productive. Professionals find they have to close their doors, set clear boundaries with family members around work hours, put on a suit and tie as if they were going into the office, and keep the home office off-limits for non-work activities, to name a few techniques, just to ensure they stay productive working remotely 2.

Non-traditional students, first-generation college students, and poor students are all likely to face challenges recreating effective work spaces. This is not a matter of bandwidth, quality computer access, or digital skills. All of these things are real challenges, but will disappear within the next decade. What’s not likely to change is the need for quiet, comfortable space to work seriously for hours on end, uninterrupted.

But these students will also miss out on another key part of what makes college an effective place for learning: you’re in a place that’s dedicated to learning, surrounded by people dedicated to the same pursuit. When you see people every day who are walking into class 3, there is a sense that you are part of a learning community. There is pressure to succeed because you see the hard work of your peers. I truly believe that making going to class and studying a habit is greatly supported by being surrounded by that habit. Look no further than the group study areas and coffee shops around universities to see tons of students outside of their dorms, working in pop-up communal offices. This is true even at Research I universities, even among students who do not share classes. Those students know how to use forums, social networking, instant messages, and more.

I am not saying that college is about unexpected collisions of people or ideas in some nebulous way. I mean quite literally that being a good student is partly possible because you’re surrounded by students.

These supports are not irreplaceable. They do not require $50,000 a year. On this, I completely agree with Carey. But the reality is the students who will easily adapt and find substitute supports, regardless of cost, will not be the ones to use MOOCs at the start.

Community colleges are the major target for MOOCs. They are already struggling to stay low-cost institutions, and their faculty are generally less credentialed and hold substantially less power than tenured faculty at research institutions. They are also less likely to be able to make the case that their lectures are world class. However, their students are the ones with the most to lose.

Community college students are about to lose a critical support: the culture of being students with other students.

Academic preparation is frequently discussed when trying to predict college success, but I don’t think we should dismiss the importance of social integration. Only an extreme classist view could hold that MOOCs remove the need for social integration because the “institution” of traditional universities and colleges no longer exists. We will simply shift the burden of a new, challenging form of integration onto those who are already struggling.

A World of Free, Decentralized Higher Education

I am also concerned with a future where MOOCs are broadly available, very inexpensive, and degrees are not considered important. This may seem like an ideal end stage, where skill and knowledge are what is rewarded and the “gatekeepers” to traditional power have fallen.

Yet, this highly market-driven future is likely to continue to exacerbate the difference between the haves and have-nots, a decidedly poor outcome. Education markets suffer from information failures.

Proceed with Caution

I am honestly thrilled about MOOCs. I just feel more cautious about their public policy implications in the immediate term. Let’s start with rigorous experiments and lowering the costs at our most elite institutions before we decide to “solve” remedial course work and the higher education system writ large in one fell swoop.


  1. Certainly in the STEM fields. It may be more challenging to address humanities and social sciences that are heavy on writing. ↩︎

  2. Yes, many people think it’s crazy that folks find it challenging to work from home. I promise you, this is a real thing, mostly experienced by people who have actually tried working from home. Hence, coworking and other solutions. ↩︎

  3. Not just your class, and not just from dorm rooms but also from cars. ↩︎

June 1, 2013

Apple will be revealing new details for both of its major operating systems at WWDC on June 10, 2013. The focus of much speculation has been how Apple will improve multi-tasking and inter-app communication in iOS7. As batteries have grown, CPUs have become increasingly powerful, and the application ecosystem has matured 1, the iOS sandboxing model has felt increasingly limiting and outdated.

I think that there is a simple change that could dramatically increase the effectiveness of multitasking on iOS by re-examining how application switching works.

Scrolling through a long list of applications, either through the basement shelf or via the four-finger gesture on an iPad, is both slow and lacking in contextual cues. In a simple case where I am working with two applications simultaneously, how do I switch between them? The list of open applications always places the current application in the first position. The previously used application sits in the second position. The first time I want to change to another application this is not so bad. I move to the “right” on the list to progress forward into the next application 2.

The trouble comes when I want to return to where I was just working. The most natural mental model for this switch is to move “left” in the list. I moved “right” to get here, and millions of years of evolution have taught me that the “undo button” for moving right is to move left 3. But of course, when I attempt to move “left”, I find no destination 4. I can pop an application from anywhere on the list, but I can only prepend new applications to the list 5.
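
For the curious, here is a toy sketch, my own illustration rather than Apple’s actual implementation, of the behavior described above: the switcher acts like a most-recently-used list, so the current app is always at the front and there is never anything to its “left”.

:::python
def switch_to(switcher, app):
    """Toy model of the app switcher as a most-recently-used list."""
    if app in switcher:
        switcher.remove(app)   # an app can be "popped" from any position...
    switcher.insert(0, app)    # ...but always re-enters at the front
    return switcher

apps = ['Mail', 'Safari', 'Notes']
switch_to(apps, 'Safari')      # ['Safari', 'Mail', 'Notes']
switch_to(apps, 'Mail')        # ['Mail', 'Safari', 'Notes']
# Returning to Safari means moving "right" again; nothing ever sits to the left.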

Apple needs to move away from its linear thinking and enter the second dimension.

What if I could drag apps in the switcher on top of each other to make a new stack, not unlike Stacks in the OS X Dock? Throughout this stack, position is maintained regardless of which application is in use or was last used. I can always move up from Chrome to Byword, and down to GoodReader, for example, if I am writing a report. Apple might call this a Stack, mirroring the term in OS X, but I would prefer this to be called a Flow.

The goal of this feature is to organize a Flow for a particular task that requires using multiple apps. One feature might be saving a “Flow”: each time I want to write a blog post, I tap the same home screen button and the same four apps launch in the same order in a Flow, ready for easy switching using the familiar four-finger swipe gesture up and down. I no longer have to worry about the sequence in which I have recently accessed applications, which is confusing and requires me to look at the app switcher drawer or swipe needlessly and repeatedly through applications. I never have to worry about lingering too long on one application while swiping through, switching to that app by accident, and resetting my position to the front of the list to start over again.

For all the calls for complex inter-app communication or having multiple apps active on the screen at the same time, it seems a simple interface change to application switching could completely change the way we multitask on iOS.


  1. And Federico Viticci has either shown us the light or gone completely mad. ↩︎

  2. For now, let’s assume the right application is next in the stack. I’ll get to that issue with my second change. ↩︎

  3. You are in a green room. > Go east. You are in a red room. > Go west. You are in a green room. ↩︎

  4. You can't go that way. ↩︎

  5. I don’t know enough about data structures yet to name what’s going on here. I am tempted to think that the challenge is they have presented a list to users, with a decidedly horizontal metaphor, when they actually have created something more akin to a stack, with a decidedly vertical metaphor. But a stack isn’t quite the right way to understand the app switcher. You can “pop” an app from any arbitrary position on the app switcher, but funny enough can only push a new app on to the top of the switcher. ↩︎

May 29, 2013

One thing I really dislike about Google Reader is that it replaces the links to posts in my RSS feed. My Pinboard account is littered with links that start with http://feedproxy.google.com. I am quite concerned that with the demise of Google Reader on July 1, 2013, these redirects will no longer work.

It’s not just Google that obscures the actual address of links on the internet. The popularity of link-shortening services, used both to save characters on Twitter and to collect analytics, has created an Internet of Redirects.

Worse still, after I am done cutting through redirects, I often find that the final link includes all kinds of extraneous attributes, most especially a barrage of utm_* campaign tracking parameters.

Now, I understand why all of this is happening and the importance of the services and analytics this link cruft provides. I am quite happy to click on shortened links, move through all the redirects, and let sites know just how I found them. But quite often, like when using a bookmarking service or writing a blog post, I just want the simple, plain text URL that gets me directly to the permanent home of the content.

One part of my workflow for dealing with link cruft is a TextExpander snippet I call cleanURL. It triggers a simple Python script that grabs the URL in my clipboard, traces through the redirects to the final destination, strips the link of campaign tracking attributes, and pastes a new, much “cleaner” URL.

Below I have provided the script. I hope it is useful to some other folks, and I would love some recommendations for additional “cleaning” that could be performed.

My next task is expanding this script to work with Pinboard so that I can clean up all my links before the end of the month when Google Reader goes belly up.

:::python
#!/usr/bin/python
import requests
from re import search
from subprocess import check_output

# Grab whatever URL is sitting in the OS X clipboard; strip stray whitespace.
url = check_output('pbpaste').strip()

# Go through the redirects to get the destination URL
r = requests.get(url)

# Look for utm attributes
match = search(r'[?&#]utm_', r.url)

# Because I'm not smart and trigger this with
# already clean URLs
if match:
  cleanURL = r.url.split(match.group())[0]
else:
  cleanURL = r.url

print cleanURL

December 19, 2012

Update: Please see below for two solutions.

I have grown increasingly unhappy with Wordpress lately. My blog is simple. My design tastes are simple. My needs are simple. I like control. I am a geek. And I really need an excuse to learn Python, which seems to be rapidly growing into one of the most important programming languages for a data analyst.

I have decided to migrate this blog over to Pelican, a static site generator written in Python. Static sites are the “classic” way to do a webpage: just upload a bunch of HTML and CSS files, maybe some Javascript. There are no databases and no constructing the page a user sees in the browser as they request it. This puts substantially less strain on a web server and makes it far easier to export and move a webpage, since all you need to do is duplicate files. What makes static sites a real pain is that there is a lot of repetition. Folks adopted dynamic sites that use content management systems so that they can write a page called “post.php” one time and, for each unique post, just query a database for the unique content. The frame around the post, the layout, the components, etc. are all written just once. Static site generators let you build a webpage using a similar, but far more stripped down, layout system. However, rather than generating each page on the web server, you generate each page by running a script locally that transforms plain text documents into well-formed HTML/CSS. Then you can just upload a directory and the whole site is ready to go.
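
The core loop is small enough to sketch. This is not Pelican itself, just a toy illustration of the generate-locally model (assuming a posts/ directory of plain text files and an output/ directory to write into):

:::python
import glob
import os

TEMPLATE = """<html>
  <head><title>{title}</title></head>
  <body><article>{body}</article></body>
</html>"""

# Every plain text post becomes a standalone HTML file. A real generator
# adds Markdown parsing, themes, feeds, and so on, but the shape is the same.
for path in glob.glob('posts/*.txt'):
    with open(path) as f:
        title = f.readline().strip()   # treat the first line as the title
        body = f.read()
    out_path = os.path.join('output',
                            os.path.basename(path).replace('.txt', '.html'))
    with open(out_path, 'w') as f:
        f.write(TEMPLATE.format(title=title, body=body))
# Upload the output/ directory anywhere that can serve static files.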

Pelican comes with a pretty good script that takes the Wordpress XML available via the built-in export tools and transforms each post into a reStructuredText file, a format similar to Markdown. I prefer Markdown, so I used pandoc to convert all my *.rst posts into *.md files.

So far, so good.

But one of the really big problems I had with Wordpress was a growing dependency on plugins that added non-standard, text-based markup to my posts to be rendered a particular way. For example, text surrounded by double parentheses became a footnote. For code syntax highlighting, I use a “short code” that puts sourcecode language='r', for example, between square brackets. All of these plugins have been great, but now when you try to export a post you get the non-standard markup in-line as part of your posts. It makes it very difficult to recreate a post the way it looks today.

This presents a great opportunity to learn a little Python. So I have begun to scrounge together some basic Python knowledge to write scripts that clean up my Markdown files and convert the syntax of the short codes I have used into properly formatted Markdown, so that when I run the pelican script it will accurately reproduce each post.

Unfortunately, I’ve hit a snag with my very first attempt. Footnotes are a big deal to me and have a standard Markdown interpretation. In Markdown, a footnote is marked in the text with [^#], where # is the footnote identifier/key. Then, at the end of the document, surrounded by new lines, the footnote text appears as [^#]: footnote text, where # is the same identifier. So I needed to write a script that found each instance of text surrounded by double parentheses, inserted [^#] in place of the footnote, and then added the footnote at the bottom of the post in the right format.

I created a test text file:

This is a test ((test footnote)).
And here is another test ((footnote2)). Why not add a third? ((Three
Three)).

The goal was to end up with a file like this:

This is a test [^1]. And here is another
test [^2]. Why not add a third? [^3].

[^1]: test footnote

[^2]: footnote2

[^3]: Three Three

Unfortunately, the output isn’t quite right. My best attempt resulted in a file like this:

This is a test [^1] And here is another te[^2])). Why not add a
t[^3]ree)).

[^1]: ((test footnote))

[^2]: ((footnote2))

[^3]: ((Three Three))

Ugh.

So I am turning to the tiny slice of my readership that might actually know Python, or just code in general, to help me out. Where did I screw up? The source of my Python script is below, so feel free to comment here or on this Gist. I am particularly frustrated that the regex appears to be capturing the parentheses, because that’s not how the same code behaves on PythonRegex.com.

If anyone can help me with the next step, which will be creating arguments so that the script understands an input like *.rst and writes its output to a corresponding *.md file, that would be appreciated as well.

import re

p = re.compile("\(\(([^\(\(\)\)]+)\)\)")
file_path = str(raw_input('File Name >'))
text = open(file_path).read()

footnoteMatches = p.finditer(text)

coordinates = []
footnotes = []

# Print span of matches
for match in footnoteMatches:
    coordinates.append(match.span())
    footnotes.append(match.group())

for i in range(0,len(coordinates)):
    text = (text[0:coordinates[i][0]] + '[^' + str(i+1)+ ']' +
            text[coordinates[i][1]+1:])
    shift = coordinates[i][1] - coordinates[i][0]
    j = i + 1
    while j < len(coordinates):
        coordinates[j] = (coordinates[j][0] - shift, coordinates[j][1] - shift)
        j += 1

referenceLinkList = [text, '\n']
for i in range(0, len(footnotes)):
    insertList = ''.join(['\n', '[^', str(i+1), ']: ', footnotes[i], '\n'])
    referenceLinkList.append(insertList)

text = ''.join(referenceLinkList)

newFile = open(file_path, 'w')
newFile.truncate()
newFile.write(text)
newFile.close()

Update with solutions:

I am happy to report I now have two working solutions. The first one comes courtesy of James Blanding, who was kind enough to fork the gist I put up. While I was hoping to take a look at his fork tonight, GitHub was experiencing some downtime. So I ended up fixing the script myself in a slightly different way (seen below). I think James’s approach is superior for a few reasons, not the least of which is avoiding the ugly if/elif/else found in my code by using a global counter. He also used .format() a lot better than I did, which I didn’t know existed until I found it tonight.

I made two other changes before coming to my solution. First, I realized my regex was completely wrong. The original regex only captured text between the double parentheses when it contained no parentheses at all. Instead, I wanted to make sure to preserve any parenthetical comments contained within my footnotes. So the resulting regex looks a bit different. I also switched from using user input to taking the file path as an argument.

My next step will be to learn a bit more about the os module, which seems to contain what I need so that this Python script can behave like a good Unix script and know what to do with a single file or a list of files as a parameter (and of course, most importantly, a list generated from a wildcard like *.rst). I will also be incorporating the bits of James’s code that I feel confident I understand and that I like better.

Without further ado, my solution (I updated the gist as well):

from sys import argv
import re

name, file_path = argv

p = re.compile(r"[\s]\(\((.*?[)]{0,1})\)\)[\s]{0,1}")
# The tricky part here is to match all text between "((""))", including as 
# many as one set of (), which may even terminate ))). The {0,1} captures as
# many as one ). The trailing space is there because I often surrounded the 
# "((""))" with a space to make it clear in the WordPress editor.

# file_path = str(raw_input('File Name >'))
text = open(file_path).read()

footnoteMatches = p.finditer(text)

coordinates = []
footnotes = []

# Print span of matches
for match in footnoteMatches:
    coordinates.append(match.span())
# Capture only group(1) so you get the content of the footnote, not the 
# whole pattern which includes the parenthesis delimiter.
    footnotes.append(match.group(1))

newText = []
for i in range(0, len(coordinates)):
    if i == 0:
        newText.append(''.join(text[:coordinates[i][0]] +
                               ' [^{}]').format(i + 1))
    elif i < len(coordinates) - 1 :
        newText.append(''.join(text[coordinates[i-1][1]:coordinates[i][0]] +
                          ' [^{}]').format(i + 1))
    else:
        newText.append(''.join(text[coordinates[i-1][1]:coordinates[i][0]] +
                          ' [^{}]').format(i + 1))
        # Accounts for text after the last footnote which only runs once.
        newText.append(text[coordinates[i][1]:]+'\n')

endNotes = []
for j in range(0, len(footnotes)):
    insertList = ''.join(['\n','[^{}]: ', footnotes[j], '\n']).format(j + 1)
    endNotes.append(insertList)

newText = ''.join(newText) + '\n' + ''.join(endNotes)

newFile = open(file_path, 'w')
newFile.truncate()
newFile.write(newText)
newFile.close()
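
As a sketch of that next step (hedged: this is one common Unix pattern, not necessarily what the os module docs will suggest), the shell expands a wildcard like *.rst into a list of file names before Python ever runs, so looping over sys.argv already handles one file or many:

:::python
from sys import argv

# The shell turns *.rst into a list of file names, so argv[1:] already holds
# every matching file; a single file works exactly the same way.
def convert_file(file_path):
    text = open(file_path).read()
    # ... run the footnote conversion from the script above on text ...
    new_path = file_path.rsplit('.', 1)[0] + '.md'
    open(new_path, 'w').write(text)

for file_path in argv[1:]:
    convert_file(file_path)
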
November 19, 2012

It is so tempting to try to apply cognitive science results in education. It seems like an obvious step on the long road of moving education from a field of theory and philosophies to one more grounded in empirical research. Yet, learning myths are persistent. Even scarier, “those who know the most about neuroscience also believe the most myths.”

Educators may have the best intentions when trying to infuse their practice with evidence, but all too often they are not equipped to be critical consumers of research. Worse, the education profession has historically been wrapped in “thoughtworld” 1, where schools of education have taught the same ideas about effective teaching and learning for decades without a basis in empirical research. These same ideas are taught to principals, district administrators, and teachers, so nary a critical voice can stop the myths from being repeated and mutually reinforced.

Effectively conducting empirical research, translating research for policymakers, and implementing research-based program design is my job. I came to education purely from a research and policy perspective, and I am equipped to understand some of the empirical research done on effective schooling 2.

I have to confront an awful history of “outsiders” like myself who have brought round after round of poorly supported, poorly evaluated reforms. I have to confront the history of districts and schools discarding some very effective programs because of leadership changes, lack of resources, and, most of all, a lack of good, systematic evaluation of programs. And I have to be damn good at what I do, because even a small misstep could paint me just like every other “expert” that has rolled through with the newest great idea.

I think this is why I tend to favor interventions that are very small. Simple, small, hard-to-mess-up interventions, based in research and implemented just a few at a time, have tremendous potential. I love the oft-cited work on filling out the FAFSA along with tax filing at H&R Block. It is simple. There is no fear about “dosage” or implementation fidelity. There are both sound theoretical reasons and empirical results from other domains that suggest a high likelihood of success. It has the potential to make a huge impact on students without adding any load to teachers who are, say, implementing a brand new and complicated curriculum this year. This is how you earn trust: by building success.

I am also a fan of some really big, dramatic changes, but how I get there will have to be the subject of a future post.


  1. E.D. Hirsch’s term ↩︎

  2. In the area of neuroscience and cognitive science, I am probably only marginally better off than most teachers. My Sc.B. is in chemistry. So a background in empirical physical sciences and my knowledge of social science may help me to access some of the research on how people learn, but I would probably be just as susceptible to overconfidence in my ability to process this research and repeat untruths as many very intelligent educators. ↩︎

November 3, 2012

Apple has released the iPad Mini. Microsoft unveiled the Surface RT. Google has expanded its play with the Nexus 4 (phone) and 10 (tablet) to sandwich the previously released 7. In virtually every review of these new devices, the Apple advantage cited was ecosystem.

Time and time again, following descriptions of well designed and built hardware 1, reviewers were forced to add some form of, “But the ecosystem cannot compete with Apple’s 275,000 tablet-optimized applications.” I think this understates the power of Apple’s amazing developer advantage.

I use three distinct computing platforms every day: my phone, my tablet, and my traditional PC (laptop and desktop). There are times where I use an application which is specific to one platform or the other. Dark Sky, for example, is incredibly useful on my iPhone but would be pretty pointless on my Mac Mini or Macbook Air. This kind of platform-specific, quality application is what most would consider the App Store advantage. Not me.

Apple’s true advantage is when applications are available across all three platforms, offering simultaneously a device-optimized and consistent experience no matter what I am using.

They offer a frictionless experience.

There is a good reason people were so excited for Tweetbot for OS X and love to use Reeder on iPhone, iPad, and OS X. Consistent features, feel, gestures, and even notification sounds across environments make computers easier to use. The so-called “halo effect” of the iPod was widely discussed in the early 2000s. iTunes on every Windows machine represented the tail end of a long play that pushed the promise of frictionless computing with Apple products. iOS delivers on this promise in spades.

Google knows a big selling point of Android is offering the best mobile experience for its web products. As an early and voracious user of Gmail, Google Contacts, and Google Calendar, I do find this enticing. But Android apps are never going to be able to offer the frictionless experience Apple offers across the mobile and desktop space. ChromeOS is Google’s best effort to push a frictionless platform, but it is entirely limited to non-native applications, so anything but Google products requires major modifications and just won’t be the same.

Microsoft sees the Apple advantage clearly, and they understand Google’s inability to fully compete. That’s why they are launching Windows 8, in many ways attempting to integrate the tablet and desktop even further than Apple has. The Surface, and Windows 8 writ large, is a bet that Apple made a mistake grouping tablets with cell phones. The tablet, according to Microsoft, is about replacing laptops and should be grouped with the desktop.

I think this is a smart play, regardless of some of the rough reviews of both the Surface RT and Windows 8. Version 1 has some awkward transitions on both devices, but that may be worth the cost to take advantage of a near future where the power available on a large tablet will be comparable to that of a laptop or even a desktop computer. Just as the MacBook Air is every bit as good a consumer computer as the fatter laptops on the market, soon tablets will be every bit as good as any consumer computer that exists. Microsoft’s bet is that with that power will come more sophisticated and complex uses, better suited to applications at home on the desktop. They are betting the future is the past: a fully multitasking-enabled, file-system-revealing environment. If that’s what users will eventually want from their tablets, Windows 8 will have these capabilities baked in from the start while iOS struggles to pump out new features and APIs to mimic (or create) these capabilities.

The future is frictionless. Apple’s true advantage is they can already offer one version of that future. If Microsoft plays its cards right, and if it is not too late 2, they can offer an equally compelling alternative. It won’t win over the real, dyed-in-the-wool Apple fans, but it may stem the tide carrying the consumer market swiftly away.


  1. Hardly a given in the past from either Google’s (LG/ASUS) or Microsoft’s partners, although Microsoft’s own hardware, until now primarily keyboards and mice (do you pluralize a computer mouse? It seems strange.), has generally been well built. ↩︎

  2. I really think it might be. Windows Phone 7 was brilliant, but released 2 years too late behind at least 1 year of development. ↩︎

November 1, 2012

I like this piece in Slate on Paul Cuffee Middle School, a charter school right here in Providence. Most of what I know about child development seems to suggest that middle schools are sort of ridiculous. Just when children are looking for role models and close relationships with adults (and not just with the kids around them), we decide that kids should have many teachers, that teachers should have higher student loads, and that the kids stay consistent while the adults change constantly.

In many ways, the elementary school model works better for middle school students and vice versa.

Anyway, some research showing K-8 schools have a built-in advantage over the traditional middle school:

The Middle School Plunge
Stuck in the Middle 1


  1. A more “popular” version on Education Next here↩︎

October 8, 2012

I have been meaning to write this post for the past couple of weeks. Like most other people, I am constantly experimenting with different ways to publish and share my thoughts and engage with social networking. Lately, I have settled into what feels like an “end state” workflow1. I will devote a future post to the details of how I manage my online reading, writing, and sharing workflow but for now I just wanted to let folks know where they can find me.

For random thoughts throughout the day I mostly turn to my Twitter account or, increasingly, my App.net account. I am a retweet abuser, so if you follow me there be warned. I often just retweet things I find funny or interesting, write some random complaint about coding, policy, or education when I’m frustrated and don’t understand the world, and try to syndicate some of the other sources I’ll list here. I also like to talk to people on Twitter, so if you’re looking for conversation, that’s the place to go. I almost use it like it’s the new IRC/AIM chatroom. My Twitter account is a bit more Providence/Rhode Island heavy than most other ways to follow me.

Some of you may know that I also have a Tumblr that has fallen in and out of favor. I used to blog over there before creating this Wordpress site2. Recently, I have used my Tumblr account much more. Since Google Reader removed its social features, I have tried to find the best way to share the best stuff I read each day with a few thoughts. I toyed with Google Plus, but it really is dead. I don’t find good content there, and engagement with my sharing has been very low. Also, the lack of a write API makes it very challenging to incorporate in a non-disruptive way.3 So right now, head over and follow my Tumblr (natively or via RSS) if you want to get 5-10 link posts each day of things I’ve collected across the web. Some of my favorite online friends found me through my Google Reader sharing, and I suspect that they would enjoy my Tumblr most of all. If I start getting more engagement around what types of links folks are enjoying, I can begin to shift the topics I post on. I collect many more links in Google Reader and Pinboard than ever end up on Tumblr. The path to Tumblr is very specific and leans more toward long reads and shares from friends than toward what I am watching on RSS.

A few months ago I ditched my original Facebook account from 2005 and reopened a fresh one. I did this for two reasons: 1) I had collected many friends that I was not truly in contact with. Because of the layers and layers of privacy changes that Facebook went through, it became very difficult to maintain settings I was comfortable with. I wanted to start fresh with friends and fresh with how I manage privacy. 2) Related to 1, I never used Facebook as a networking tool. To me, it was always supposed to be a way to interact and keep in touch with friends from “real life”. Ultimately, I didn’t find that aspect of Facebook to be all that valuable. So I’m trying to be a believer and use Facebook more like I use other social media: a way to tap into my “interest graph” and meet new people and read new things and have new conversations. You can follow me there with a few caveats. I hate using Facebook, so it is probably going to have the least content. There will still be some personal stuff, as most of my friends still see Facebook as an intimate space, shocking though that may seem. Finally, I may not friend you back. Yes, the point of this account is to be more open, but Facebook still creeps me out, and on any given day I may feel more or less inclined to be open on there.

This blog will remain where I write longer pieces that are primarily “original” analysis/thoughts and less news/broadcast-like. I hope to share a lot more code and thoughts on current research in the near future now that I’m changing jobs.


  1. Subject to change, but I’m betting it’s more tweaks at this point than dramatic shifts ↩︎

  2. and I really want to leave Wordpress, but that is going to be a big project ↩︎

  3. Definitely more on this in my future workflow post ↩︎

September 30, 2012

I have not had the opportunity to read Paul Tough’s newest book on “grit”1. I have, however, read Paul Tough’s New York Times Magazine article on grit and recently listened to an EconTalk podcast where he discussed How Children Succeed.

The thrust of Tough’s argument, if I were to be so bold, is that there is a definable set of non-cognitive skills, called “grit”, that are at least as important as academic achievement in determining long-term positive outcomes for kids. Great schools, therefore, would do well to focus on developing these habits as much, and as intentionally, as they do developing content knowledge and academic prowess. This, according to Tough, is a big part of the “magic sauce”2 of “No Excuses” schools like KIPP. They teach “grit” as a part of their intense behavioral management and culture efforts.

I think Tough is an engaging writer with a great knack for finding some of the most interesting research not often read in education policy circles, but which is clearly relevant. While listening to the EconTalk podcast I found myself often disagreeing with his interpretations and conclusions. But more often, I found myself desperately wishing for a different, slower format, because so much of this work begged deeper questioning and conversation. What better reason could there be to buy and read a book-length treatment of these ideas?

Anyway, I thought I’d share just a few of my thoughts on “grit” based on this interview and the earlier New York Times Magazine piece.

Teaching conscientiousness in a society that has been so unconscientious

It seems fairly obvious that people who don’t “play by the rules” and aren’t easily motivated to conform to certain habits are less likely to be successful. It is unsurprising that Tough finds research suggesting there is a “grit” gap between rich and poor. I want to know more about why, and I have what I hope is one interesting idea about what contributes to the “grit gap”.

I believe that deterioration of the built environment, especially among the urban and truly rural poor, is a major contributor to low grit. Some parts of this country with high concentrations of poverty look bombed out. Roads are littered with deep potholes and scars. The houses have chipped paint, rotting wood exterior elements, and unkempt yards. Storefronts were built decades ago on the cheap, aged poorly, and were never updated. Their schools lack good lighting, decent HVAC systems, and functioning toilets. There is no pride found in any of these spaces.

Children growing up in poverty do not see neighbors obsessing over their lawn. They do not watch one house after another repaint and reface their exteriors to ensure they weren’t the ugliest house on the block. They do not see brand new cars, fresh asphalt roads, and schools that resemble palaces. I don’t think virtually any of this has to do with the people who live in these neighborhoods. I do think it reflects the pathetic state that society has deemed acceptable, so long as it remains sight unseen by those with resources.

Growing up in poverty often means being surrounded by spaces that society has left to rot. How can these children learn conscientiousness when the privileged have been so unconscientious?

The M&M Study

Tough mentions a study where students first take an IQ test under normal conditions. These same students are then given an IQ test but are rewarded with an M&M each time they get a question right. This tiny, immediate incentive resulted in a massive 1.8 SD improvement in mean IQ. 3 The implications are fascinating. It demonstrates the importance of motivation even while taking a test that supposedly measures a fixed, permanent attribute. This seems obvious and is fairly well known, but it is forgotten in many policy circles. I have often lamented that the New England Common Assessment Program has a sizable downward bias when measuring achievement because the exam is low stakes for students. The dramatic decrease in performance observed on the 11th grade NECAP math exam is almost certainly due in part to lower intrinsic motivation among high school students compared to their 8th grade and younger selves.

There are some students who show no measurable response to the M&M incentive. These students are exhibiting the qualities of Tough’s “grit”: the conscientiousness that leads one to do well simply because one is being measured, or perhaps because there is no reason to do something if it is not going to be done well. I believe that there is also a bias against schools with concentrated poverty because of an uneven distribution of “grit”: suburban middle- to upper-class students with college ambitions will likely be the students who sit down and try hard on a test just because they are being measured, whereas urban students living in poverty are far less likely to exert that same effort for an exercise with no immediate or clear long-term consequences.

All of this would be pretty unremarkable were it not for the more distal outcomes observed. The group of students that did not respond to the M&M incentive had significantly and practically better outcomes than those that responded to it. I can’t recall exactly which outcomes were a part of this study, but Tough cites several independent studies that measure a similar set of qualities and find far better outcomes on GPA, high school graduation, post-secondary degree attainment, juvenile delinquency and adult criminal activity, and wages.

Tough’s interpretation of these results seems to mirror my feelings on grading. Low-stakes testing (or, in this case, no-incentive testing) suffers from omitted variable bias, which leads to observing students who lack “grit” as lower achieving than they are. The test results are still excellent predictors of later success but lack validity as a pure measure of academic achievement. My complaint about grades that use behavior, attendance, and participation4 does not stem from their lack of validity in predicting later outcomes. These grades are excellent predictors of later outcomes. Rather, it stems from these grades conflating two very different qualities into a single measure, making it far more difficult to design appropriate interventions and supports that target individual needs.

Tough seems to think this means that the high stakes placed on test scores overemphasize one quality over the other when both are very important. I disagree. I feel that high-stakes test scores recreate the M&M incentive and lead to a better measure of academic ability. That is not to say that we don’t need to cultivate and measure non-cognitive skills. It just means that trying to measure both at once5 results in less clear and actionable interpretations.

Is the “grit” problem properly described as a failure to recognize long-term benefits?

Repeatedly both Tough and host Russ Roberts point to the need to provide students who lack grit more information on the long-term benefits of “doing well”. For example, Tough cites KIPP’s posting of the economic benefits of a bachelor’s degree on walls in the halls of their schools as a way to build grit. Somewhat left unsaid is the idea that grit-like behaviors may not describe some kind of “intrinsic” motivation, but instead represent an understanding of the long-term extrinsic benefits of certain actions. Grit really means understanding that, “If I behave appropriately, I will gain the respect of this authority and earn greater autonomy/responsibility,” or perhaps, “Doing my homework each night will teach me good habits of work and help me to learn this academic material so I can succeed in college and get a better job.”

Can grit really be just a heuristic developed to better respond to long-term incentives?

I am not sure. I am equally unsure that the activities of a “No Excuses” school actually generate the long-term benefits of “grit”. If grit is a powerful heuristic for optimizing long-term outcomes, how do we know that the many short-term incentives that build behaviors toward academic success also lead students to respond better to a broad set of long-term outcomes? Should we believe that behavior bucks/demerit systems, constant small corrections, repeatedly stating the goals of education and its benefits, and other KIPP-like culture-building strategies build a bent toward acting in ways that maximize long-term outcomes? Do students aspire to college because they have internalized its importance, or does the stack of short-term incentives build a desire for sprockets, wingnuts, and widgets that just happen to be called a “bachelor’s degree” in this case?


  1. I use “grit” a lot in this post. Please insert quotes each time. It got obnoxious reading it with the quotes actually in place ↩︎

  2. My term, not his. Probably stolen from one of my colleagues who uses this term a lot. ↩︎

  3. From 79 to 97 according to EconTalk ↩︎

  4. among other non-cognitive, non-academic skills and activities ↩︎

  5. Or inadvertently measuring both at once, as many low-stakes standardized tests do ↩︎

Philip Elmer-DeWitt has suggested the iOS6 Maps debacle falls on the shoulders of Scott Forstall1. When I first read the piece, I felt like it was unfair to blame management for this kind of failure. In my experience, the Maps application is wonderful software. The turn-by-turn directions are elegant and beautiful. The vector-based maps load fast and use substantially less data. The reality is the Map app is great; the data are less so.

Building great mapping data is no easy task. It takes years. It takes human intervention. It takes users. Short of a massive acquisition of an existing player, like Garmin, there was little hope of Apple developing a great map application for day one of release. Hell, in my experience, most standalone GPS data is pretty awful in all the ways the Apple data is awful. That’s why I have primarily used my iPhone as my GPS the last few years. The experience was consistently better and less frustrating. Perhaps even more critically, Apple is just not a data company. Google is the king of data. The skills required to build great geographic data simply don’t map well onto previous Apple competencies. None of this means that the Apple Maps situation is good or even “excusable”. I just think the situation is “understandable”, and it would not have been much different with different guidance.

But then I reevaluated and realized that there is a major way that management could have improved Apple Maps for iOS. Managers should set the bar for quality, make sure that bar is met, and adjust both resources and expectations when a project is not meeting user expectations. It must have been obvious to Apple management that the quality expectations were not going to be met.

What could Forstall have done? Some have suggested throwing substantially more money at the project. Others say he should have “winked” at Apple users and clearly signaled that Maps was in its infancy. And of course there were those who said he should have waited another year for the Google Maps contract to expire. John Gruber is rather convincing that simply waiting another year was not an option. Apple really couldn’t swap maps out of iOS in the middle of the OS cycle. It would be jarring and far more frustrating than the current situation.

I would have recommended a third option.

Apple should have released iOS6 Maps as US only.

@jdalrymple what if Apple execs realized it wasn’t going well & made maps US only & world in 6-12mo. Still had Google contract time for that

— Jason Becker (@jasonpbecker) September 29, 2012

One of the major themes of the iPhone 5 release was that this was a global phone: global LTE, with day-one launches in more countries and a faster rollout to far more countries than ever before. In fact, the Verizon CDMA iPhone comes with an unlocked GSM radio. But mapping is hard, and the problem becomes orders of magnitude more difficult with each inch of the planet that needs to be covered. When it became clear that Apple had a beautiful application but awful data, Forstall and the rest of Apple management should have adjusted expectations and promised a US-only release that met the quality consumers have come to expect. This would have increased resources, winked at users, and used the remainder of the Google contract for international mapping. With six additional months, Apple could have made great strides improving international data and possibly signed some additional high-profile map data deals with local sources and competitors that would love to be associated with Apple, even if just in a footnote. US users would rave about the great vector mapping, the turn-by-turn directions that are brilliantly integrated into the lock screen and always provide just enough information, and the cool integration with OpenTable and Yelp. US maps would get better because they would have constant users. The rest of the world would lap up iPhone 5s and wait anxiously for their chance to taste the Great Apple Maps.

In this scenario, it is possible that Apple could have had the best of both worlds: a far worse data set in an application that cost just as much, but, by limiting the scope to its key market, a reputation for excellence that would build excitement for the end of a competitor’s product.

I am sure there were other challenges with producing a US-only release2 that I am not considering. But I think this is at least one typical technique in IT management that Apple could have employed for a smoother, better release of their first effort in a complicated and competitive space.


  1. Of iOS skeuomorphism fame↩︎

  2. Or North America only. There are barely any roads in Canada, right? ↩︎

September 18, 2012

Bruno is a skeptic on standards-based grading. He seems to think that “mastery of content” is too abstract for students to work toward and rightly cites evidence that motivation and changed behavior are tightly linked to a sense of efficacy, which in turn is tightly linked to feeling as though you know precisely what to do to get to a particular outcome.

But isn’t mastery of content essentially, “Do well on your assignments and tests”? And while a massive, standards-based report card may be hard for a parent to read, is it any more confusing than seeing awful results on standardized tests and a student who clearly doesn’t read on grade-level receive good grades because of participation, attendance, and behavior? As a parent, how do you know to intercede on your child’s behalf when you see a “B” which actually represents a C- on content knowledge and skills and an A+ for effort, behavior, and completion?

Ultimately, I am against including behavior, attendance, and effort as a part of the same grade as academics. I think there needs to be a clear place to present evidence of academic ability and growth independent of behavioral growth. Both are important, and while linked, are certainly not moving in lockstep for the typical child. Accurate information in both domains is far better than falsely presenting a singular, mixed-up “truth” about a child’s success in school.

For the same reason I am not a fan of school report cards with a single letter grade rating, I am not for just a single letter grade for students. Ultimately, they both represent poor combinations of data that obscure more than they reveal.

Developing report cards or “grading” systems, both for program evaluation and for students, always conjures one of the few concepts I recall from linear algebra. It seems to me that any good grading system should provide a basis, that is, a minimal set of linearly independent vectors which, via linear combination, can describe an entire vector space. Remove the jargon and you’re left with:

Measure the least amount of unrelated things possible that, taken together, describe all there is to know about what you are measuring.
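
To put the linear algebra back in for just a moment, here is my own sketch of the analogy (not a claim about how any district actually computes grades):

:::latex
% A student's record as a vector with independent components:
\mathbf{g} = a\,\mathbf{e}_{\mathrm{academics}} + b\,\mathbf{e}_{\mathrm{behavior}}
           + c\,\mathbf{e}_{\mathrm{attendance}} + d\,\mathbf{e}_{\mathrm{effort}}

% A single letter grade reports only one weighted combination of those components:
G = w_1 a + w_2 b + w_3 c + w_4 d

Many very different students collapse onto the same G; only the separate components let you tell them apart.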

A single grade that combines all the effort, behavior, attendance, and various unrelated academic standards might give me an overall description that says “round”. But by separating out the data at some other level, the picture might describe a golf ball and its dimples, a baseball and its stitches, or a soccer ball with its hexagon-pentagon pattern.

I think we need to find a way to let people know what kind of ball they have.

August 21, 2012

So I have this great little custom function I’ve used when looking at survey data in R. I call this function pull(). The goal of pull() is to quickly produce frequency tables with n sizes from individual-level survey data.

Before using pull(), I create a big table that includes information about the survey questions I want to pull. The data are structured like this:

quest     survey      year      break
ss01985   elementary  2011_12   schoolcode

  • quest represents the question coding in the raw survey data.
  • survey is the name of the survey (in my case, the elementary school students, middle school students, high school students, parents, teachers, or administrators).
  • year is the year that the survey data are collected.
  • break is the ID I want to aggregate on like schoolcode or districtcode.

The key is that paste(survey, year, sep='') produces the name of the data.frame where I store the relevant survey data. Both quest and break are columns in the survey data.frame. Using a data.frame with this data allows me to apply over the rows and produce the table for all the relevant questions at once. pull() does the work of taking one row of this data.frame and producing the output that I’m looking for. I also use pull() one row at a time to save a data.frame that contains these data and do other things (like the visualizations in this post).

In some sense, pull() is really just a fancy version of prop.table that takes in passed parameters, adds an “n” to each row, and adds a “total” row. I feel as though there must be an implementation of an equivalent function in a popular package (or maybe even base) that I should be using rather than this technique. It would probably be more maintainable and easier for collaborators to work with that more common implementation, but I have no idea where to find it. So, please feel free to use the code below, but I’m actually hoping that someone will chime in and tell me I’ve wasted my time and I should just be using some function foo::bar.

P.S. This post is a great example of why I really need to change this blog to Markdown/R-flavored Markdown. All those inline references to functions, variables, or code should really be formatted as inline code, which the syntax highlighter plug-in used on this blog does not support. I’m nervous that using the WP-Markdown plugin will botch formatting on older posts, so I may just need to set up a workflow where I pump out HTML from the Markdown and upload the posts from there. If anyone has experience with Markdown + Wordpress, advice is appreciated.

library(reshape2)  # needed for dcast() below

pull <- function(rows){
  # Takes in a vector with all the information required to create crosstab with
  # percentages for a specific question for all schools.
  # Args:
  #  rows: Consists of a vector with four objects.
  #        quest: the question code from SurveyWorks
  #        survey: the "level" of the survey, i.e.: elem, midd, high, teac,
  #        admn, pare, etc.
  #        year: the year the survey was administered, i.e. 2011_12
  #        brk: the "break" indicator, i.e. schoolcode, districtcode, etc.
  # Returns:
  # A data.frame with a row for each "break", i.e. school, attributes for
  # each possible answer to quest, i.e. Agree and Disagree, and N size for each
  # break based on how many people responded to that question, not the survey
  # as a whole.

  # Break each component of the vector rows into separate single-element vectors
  # for convenience and clarity. ("break" is a reserved word in R, so the break
  # column name is stored in brk.)
  quest  <- as.character(rows[1])
  survey <- as.character(rows[2])
  year   <- as.character(rows[3])
  brk    <- as.character(rows[4])
  data <- get(paste(survey, year, sep=''))
  # Data is an alias for the data.frame described by survey and year.
  # This alias reduces the number of "get" calls to speed up code and increase
  # clarity.
  results <- with(data,
                  dcast(data.frame(prop.table(table(data[[brk]],
                                                    data[[quest]]),
                                              1))
                        ,Var1~Var2,value.var='Freq'))
  # Produces a table with the proportions for each response in wide format.
  n <- data.frame(Var1=rle(sort(
    subset(data,
           is.na(data[[quest]])==F & is.na(data[[brk]])==F)[[brk]]))$values,
                  n=rle(sort(
                    subset(data,
                           is.na(data[[quest]])==F &
                             is.na(data[[brk]])==F)[[brk]]))$lengths)
  # Generates a data frame with each break element and the "length" of that break
  # element. rle counts the occurrences of a value in a vector in order. So first
  # you sort the vector so all common break values are adjacent, then you use rle
  # to count their uninterrupted appearance. The result is an rle object with
  # two components: [[values]], which represents the values in the original,
  # sorted vector, and [[lengths]], which is the count of their uninterrupted
  # repeated appearance in that vector.
  results <- merge(results, n, by='Var1')
  # Combines N values with the results table.

  state <- data.frame(t(c(Var1='Rhode Island', 
                          prop.table(table(data[[quest]])),
                          n=dim(subset(data,is.na(data[[quest]])==F))[1])))
  names(state) <- names(results)
  for(i in 2:dim(state)[2]){
    state[,i] <- as.numeric(as.character(state[,i]))
  }
  # Because the state data.frame has only one row, R coerces to type factor.
  # If I rbind() a factor to a numeric attribute, R will coerce them both to
  # characters and refuses to convert back to type numeric.
  results <- rbind(results, state)
  results
}   
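
For what it’s worth, the core of what pull() computes for a single question can be sketched in a few lines of base R. This is a hedged illustration rather than a drop-in replacement (no state row, no reshaping, and df, quest, and brk are stand-in names):

:::r
# df holds individual-level responses; quest and brk name two of its columns.
df <- data.frame(schoolcode = c('A', 'A', 'B', 'B', 'B'),
                 ss01985    = c('Agree', 'Disagree', 'Agree', 'Agree', NA))
quest <- 'ss01985'
brk   <- 'schoolcode'

tab <- table(df[[brk]], df[[quest]])                    # NAs drop out, as in pull()
out <- cbind(as.data.frame.matrix(prop.table(tab, 1)),  # row proportions per break
             n = rowSums(tab))                          # respondents to this question
out
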
August 5, 2012

I like paying for good software. There are applications I use every day, some for hours a day, that make my experience on the web and on my computers better. I have paid for Reeder on three platforms, Tweetbot on two1, Pinboard, and many others. I like to pay, because I value my time, my experience, and my productivity.

I also like to pay because I value my privacy.

Don’t get me wrong: I am a Google addict, using Gmail practically from the beginning, GChat, Google Calendar, Google+, Google Reader, etc., etc. I have a Facebook account (although I recently removed my original account from 2005). I spend quite a bit of time on Twitter. These are great places to do great work and to have a lot of fun. They are key parts of my professional and personal life. All of these services, however, are built around the model of selling me. They offer a real modern-day example of TANSTAAFL. Nothing leaves my pocket, but massive hoards of data are used to direct advertising my way, and some of that data is even sold to other companies. Knowing your customers has always been valuable, and the price of “free” is my very identity.

Now, generally I think that these major companies are good stewards of my privacy. As a budding data professional, I know just how difficult and meaningless it would be for any of these companies to truly target me rather than learn about a cloud of millions of people moving at once. I also believe they realize how much of their business model requires trust. Without trust, giving up our privacy will feel like an increasingly large ask.

I value my privacy, but I value good software as well. Right now, I have not found alternatives for many “free” services that are good enough to make up for the cost of my privacy. I am a willing participant in selling my privacy, because I feel I get more value back than I am losing.

But, privacy is not the only reason I wish there were alternative services and software I could buy.

I was probably pretty sloppy in this post interchanging “software” and “services”. Many of the websites or software I mentioned are merely front ends for a more valuable service. Gmail is not the same thing as email. Reeder is actually a software alternative (and more) to Google Reader’s web-based front end for a news aggregator. GChat is just a Jabber/XMPP client. Ultimately, much of what I do around the internet is about moving structured data around between peers and producer-consumer relationships. All of the great things that made the web possible were protocols like HTTP, TCP/IP, etc. And the protocols of today’s web are the standardized APIs that allow programmers a way to interact with data. Great, innovative software for the web is being built that ultimately changes the way we see and edit data on these services. The common analogy here is that of a utility. The API helps users tap into vast networks of pipes and interact with the flow of information in new, exciting ways.

To get a sense of how amazing new things can be done with an API look no further than IFTTT. It is like a masterful switching station for some of the most useful APIs on the web. Using Recipes on IFTTT, I can do something amazing like this:

  1. Find a really cool link from a friend on Twitter.
  2. Save that link to Pinboard, a great bookmarking site, with tags and a description so that I can find it later easily.
  3. Add tags to the Pinboard bookmark for the social sites I want to share on, e.g. to:twitter to:facebook to:linkedin to:tumblr, all of which are special tags that I use with the IFTTT API.
  4. IFTTT, which is linked to my Pinboard account, looks occasionally to see any recently saved links. It finds a new link with those special tags (called Triggers).
  5. Each of those special tags tells IFTTT to make a new post on a different social networking site sharing my link (sometimes with tags, sometimes with the description, sometimes with nothing, all of which I set up) seamlessly without any user interaction.
  6. My cool link gets sent strategically where I want it to be sent without ever leaving the site. I just clicked one button and added the right tags.

This kind of interaction model is impossible without agreed upon standards for sites to read and write information to one another.
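To make that concrete, here is a rough sketch of what one leg of that pipeline looks like at the API level: saving a bookmark to Pinboard from R. The endpoint and parameter names reflect my memory of Pinboard’s v1 API, and the token and link are placeholders, so treat this as an illustration rather than working code.

# Illustrative only: the auth token and link below are made up, and the
# endpoint/parameters are from memory of Pinboard's v1 API.
token <- 'username:XXXXXXXXXXXXXXXX'
link  <- 'http://example.com/cool-article'
tags  <- 'to:twitter to:facebook'
call  <- paste('https://api.pinboard.in/v1/posts/add',
               '?auth_token=', token,
               '&url=', URLencode(link, reserved=TRUE),
               '&description=', URLencode('A cool article', reserved=TRUE),
               '&tags=', URLencode(tags, reserved=TRUE),
               sep='')
readLines(call)  # Pinboard answers with a small XML result element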

The open API which made it so easy to innovate quickly from the outside– Facebook’s Platform, the Twitter API, etc– is under a serious existential threat. The truth is, these darlings of Web 2.0 don’t have a great idea about how to make money. The free web has almost entirely depended on advertising revenues to turn a profit. But how can these companies make money if I’m not using their webpage or their website to get access to my data?

Do you see the part that I slipped in there? These companies have lost sight of one very important part of the equation– the content was free because the users created it. It’s our data.

Twitter seems to be on the verge of removing or limiting critical portions of their API at the expense of many developers building new ways to interact with Twitter data, and, more importantly, all of their users who have joined Twitter because it was a powerful platform, not just a fun interactive website. Their tumultuous corporate culture has landed here because they decided that the promise of big revenues for their investors is not enhanced by people accessing Twitter through unofficial channels. Facebook has made similar moves in light of its short, but disastrous history as a public company.

If things shake out the way they seem to be, the sexy startups of Web 2.0 will turn away from the openness conducive to gaining users as they mature. These sites will consolidate and limit the experience, pushing for more page views and time on their site by making it hard to leave. They are rebuilding America Online2, trying to make it so that their webpage becomes synonymous with “the Internet” for their users. Want your ads to be worth more money? Make it hard to change the channel.

It is for this reason that I am supporting App.net. The commitment is a little steep, until you consider how valuable these services have become. For the cost of one pretty nice meal out with my girlfriend, I am purchasing one of the best ways to communicate on the web. I am supporting a model for good software that means that user experience and needs are paramount. I am purchasing customer service because I am sick of ad companies being the customer while I am stuck as the product. I am paying for access so that there is a large competitive market vying for the best way for me to interact with my data. I am paying because I am ready, no, desperate, for great consumer software and services that live and breathe to optimize my experience. I used to trust the free web for this, but their business model and their success means they don’t need me as much as they need their advertisers anymore.

Please join me in supporting App.net. Even better, please join me in finding ways to buy great software to support the products that make our lives more fun and our work more efficient and productive3.

This is the path to a successful Web 3.0.


  1. almost certainly three when they are out of alpha with their OSX application ↩︎

  2. Facebook, especially in my opinion ↩︎

  3. and please choose and support FOSS solutions with your time, labor, and/or money ↩︎

July 11, 2012

Ted Nesi was gracious in offering me a guest spot on his blog, Nesi’s Notes this week to discuss education funding in Woonsocket. The main conclusions of my post are:

1. Woonsocket has not increased local funding for education over the last fifteen years despite massive increases in education expenditures in Rhode Island and nationwide.

2. General education aid from the state has rapidly increased over the same period, demonstrating that a lack of sufficient revenue at Woonsocket Public Schools is first, if not exclusively, a local revenue problem.

I wanted to provide three additional bits of information on my personal blog.

First, I want to outline some analyses that I have not done that I think are critical to understanding education funding in Woonsocket. I will also describe more completely what conclusions cannot be drawn from the analysis on Nesi’s Notes.

Second, I want to discuss the legal context of school funding in Rhode Island. This is especially interesting since Pawtucket and Woonsocket are both currently suing the state for additional funds for the second time. I am going to review what happened the first time these communities brought their fight for education aid to the courthouse and explain why I believe this strategy will fail once again.

Third, I want to provide instructions on precisely how I retrieved the data and created the graphs in that post. I am a firm believer in “reproducible research”, so I want to be entirely transparent on my data sources and methods. I also think that too few people are acquainted with the Common Core Data provided by the National Center for Education Statistics that I relied on exclusively for my guest blog. Hopefully these instructions will help more concerned citizens and journalists in Rhode Island use data to back up assertions about local education.

Please reserve your comments on my original posts for Nesi’s Notes. I have disabled comments on this post, because I would like to keep the comments on the original analysis contained in one place. Feel free to comment on each of the follow up posts.

My last post ended with an important question, “Who is responsible for ensuring students are receiving a certain minimum quality education?”

This is my attempt at answering that question.

Does the state have a legal obligation to fiscally ensure that Woonsocket students are receiving an equitable, adequate, and meaningful education? San Antonio v. Rodriguez, a landmark Supreme Court case decided in 1973, determined that there was no fundamental right to education guaranteed by the U.S. Constitution. Since that decision, advocates for fairer education funding have focused their efforts in state courts, arguing over provisions in state constitutions that include some rights to education.

In Rhode Island, the City of Pawtucket v. Sundlun in 1995 tested Article XII of the state constitution which stated “it shall be the duty of the general assembly to promote public schools…”. In this case, East Greenwich, Pawtucket, and Woonsocket sued the state claiming that the duty to promote public schools amounted to a guarantee of equitable, adequate education funding from the state, a burden not met by the current General Assembly education aid distribution.

I am not a legal expert, but I find the conclusions of the Supreme Court abundantly clear. In Pawtucket, the court decided to overturn a Superior Court decision which had earlier ruled that the state constitution guaranteed each child, “receive an equal, adequate, and meaningful education.” Pawtucket finds that the General Assembly’s responsibility to “promote” as “it sees fit” (emphasis added in the original decision) is quite narrow; the General Assembly clearly has the power to determine how to “promote” education, it has historically used that power in a way that relied on local appropriations to education, and the courts do not even have a judicable standard1 to determine whether the General Assembly has failed to “promote” education.

The current lawsuit asserts two things have dramatically changed since Pawtucket that justify a second look and a new ruling2. First, one portion of the state constitution that was relied on in the prior ruling has since been changed. The Supreme Court’s decision stated:

Moreover, in no measure did the 1986 Constitution alter the plenary and exclusive powers of the General Assembly. In fact, the 1986 Constitution provided that:

“The general assembly shall continue to exercise the powers it has heretofore exercised, unless prohibited in this Constitution.” Art. 6, sec. 10.

Essentially, the judge stated that this section of the state constitution meant that the legislature was retaining the right to exercise its powers as it had historically. In the case of education, this means “the power to promote public education through a statutory funding scheme and through reliance on local property taxation,” in accordance with the findings in the decision. However, Article 6, section 10 of the state constitution has subsequently been repealed. It is worth repeating what I said previously: I am not a legal expert. However, I find the argument to overturn Pawtucket on the basis that the General Assembly is no longer expressly continuing to exercise its power as it previously did to be weak. My understanding of the Pawtucket ruling is that the court had only strengthened the importance of historical context in making this decision by leaning on this constitutional provision. The importance of historical context still remains, even without this provision. In the Pawtucket decision, the “exercise of powers it has heretofore exercised” is interpreted to mean that unchanged constitutional language reflects unchanged powers. By maintaining the same language in 1986, despite amendments offered that would have more explicitly established a right to education, the General Assembly was, in effect, affirming its intent to continue to promote education as it had in the past. The plaintiffs in the current case, presumably, will argue that without Article 6, section 10, the General Assembly is allowing the courts to reinterpret even the same language to imply a different set of rights and responsibilities than it has historically. I have to ask, if the General Assembly’s intent was to signal that Article XII should now be interpreted as establishing a right to education, why wouldn’t they have adopted new, clearer language as was proposed in 1986? Having full awareness of the decision in Pawtucket, it is hard to see that the General Assembly would signal a change in its power and responsibility to promote education through a repeal of Article 6, section 10. I would assert this change simply shifts some of the burden onto the finding that the General Assembly “sees fit” the promot[ion] of some judicable right to education that is the state’s fiscal responsibility.

This is the critical piece that the plaintiffs will not find. Nowhere has the General Assembly exercised its power to *promote* in this way. In fact, one only has to look at how the General Assembly has acted to establish a judicable right to education to observe precisely how it sees fit. Rhode Island General Law 16-7-24, titled “Minimum appropriation by a community for approved school expenses,” is a provision that all school committees are quite familiar with. Here, the General Assembly does establish a judicable standard for education, set by the Board of Regents of Elementary and Secondary Education in regulations known as the “basic education program”. But where Pawtucket fails to establish a constitutional guarantee for state funding in a particular amount for education, Rhode Island statute is quite clear on a minimum standard for local support. The law states that “Each community shall appropriate or otherwise make available… an amount, which together with state education and federal aid… shall be not less than the costs of the basic program… The Board of Regents for Elementary and Secondary Education shall adopt regulations for determining the basic education program…” In other words, Rhode Island statute squarely places the burden for meeting the Basic Education Program on cities and towns raising the required revenue. “A community that has a local appropriation insufficient to fund the basic education program … shall be required to increase its local appropriation…”

It seems pretty clear to me. While the plaintiffs in the current case will presumably argue that state regulations and laws do represent a judicable standard, they will be unable to find where the General Assembly, through action, has affirmed that it is the role of state aid to meet this standard. Instead, the law directly states that local appropriations are to be increased if the Basic Education Program cannot be met. I cannot imagine that the Supreme Court would exercise its power to assert that the General Assembly’s inaction implies more about the purpose of unchanged constitutional language than the General Assembly’s actions.

In summary, although the city is again suing the state for additional education aid, it is clear that over the last 15 years the state has substantially increased its support for Woonsocket Schools3. Furthermore, previous Rhode Island Supreme Court decisions and Rhode Island law clearly place the burden of adequate school funding squarely on the shoulders of cities and towns, not the General Assembly. In my view, the changes in education law and policy since Pawtucket do not imply a change that would impact the court’s ruling.

This post is the second post of a three-part follow up on my guest post for Nesi’s Notes. Parts I and III can be found here.


  1. meaning measurable and enforceable by court room activities ↩︎

  2. Note: I have not read the complaint as I probably should have for this post. I ran out of time. However, I feel fairly certain from press coverage that I am correctly stating their main points ↩︎

  3. See my post on Nesi’s Notes ↩︎

There are several questions that come to mind when looking over my analysis on Nesi’s Notes. The first thing I wondered was whether or not Woonsocket had raised local revenues by similar amounts to other communities but had chosen to spend this money on other municipal services. Ideally, I would use a chart that showed local education revenues compared to all other local revenues over the last 15 years by city in Rhode Island. Unfortunately, Municipal Finance does not separate local, state, and federal revenue sources in the Municipal Budget Survey, so it is hard to know how communities have funded different services. I am sure with a bit of finagling, I could come up with a fairly good guess as to whether or not Woonsocket has simply chosen to fund other municipal services with its taxes, but quite frankly it is not precise enough to make me feel like it’s worth the exercise of extracting data from PDF tables. I hope someone else will take up some form of this analysis, possibly by requesting the breakdowns from Municipal Finance.

Another consideration is whether there is any truth to Woonsocket’s claims that it simply does not have the ability to generate enough local revenue for their schools. I am skeptical on this claim. Three pieces of evidence suggest to me that this may not be true.

  1. The magnitude of the shortfall between the rest of the state and Woonsocket over the last 15 years when it comes to local education revenue. On its face, I don’t find it credible that Woonsocket’s tax base is so weak that it could not increase local revenues for schools even at the rate of inflation. Not increasing local revenue for schools seems to leave only two possibilities: 1) local revenues in general were not increased, meaning Woonsocket would have to argue that its taxation in FY95 was so high relative to everyone else that it took nearly 15 years for the rest of the state to catch up, hence no additional revenues; or 2) Woonsocket did raise local revenues, and chose to spend the money elsewhere. Had Woonsocket’s local education aid risen 65-75% versus a state average of around 100%, I probably would not have even written my post on Nesi’s Notes.
  2. Andrew Morse’s analysis presented on Anchor Rising.1 Woonsocket appears to be on the low to typical end of revenues as a proportion of non-poverty income. It does not seem that they are anywhere near the “most” taxed city or town by this measure. I am not an expert on tax policy, but this measure seems fairly straightforward, fair, and informative.
  3. The mammoth proportions of Woonsocket’s budget being spent on pensions (through debt service) and other post-employment benefits. A full 15% or so of Woonsocket’s local revenues are being spent in these areas. This suggests to me that misappropriation and poor planning have led to the erosion of local support for schools, not a lack of revenue generating capacity. If this truly is the case, then Woonsocket residents are really in trouble. Their leaders have managed to generate all of the high costs and high taxes experienced in Rhode Island without providing the quality of service that should be expected given those investments.

Of course, I failed to offer any recommendation for remedy in the Nesi’s Notes post. How should Woonsocket schools become “whole” again? How can this possibly be accomplished in the context of a city on the brink of financial failure? Who has the legal responsibility to ensure that Woonsocket’s children get the education they deserve? I have no answers on the first two points. However, in the next section of this post I hope to answer the last question, which is also the subject of a lawsuit filed by Pawtucket and Woonsocket against the state of Rhode Island.

Who is responsible for ensuring students are receiving a certain minimum quality education?

This post is the first of a three-part follow up on my guest post for Nesi’s Notes. Parts II and III can be found here.


  1. Andrew has been writing quite a bit about Woonsocket. For his most recent post, Andrew demonstrates Woonsocket has the fourth lowest revenues from residential taxes as a proportion of community wealth. A few things I’d like to point out on that post. First, I think Andrew was right to adjust for poverty in previous posts in a way he was unable to due to the structure of the new data. I support progressive taxation, so I don’t believe that it is fair to say that we should expect the same percentage of income tax from poorer communities that we do from wealthier ones. I also think that commercial taxes are very important revenue sources. I don’t think they should be universally dismissed when used as a substitute for residential revenues. There are times when, at the margin, the greatest benefit can be had by lowering residents’ taxes. However, I do think that commercial tax should not be used as a substitute when there isn’t enough revenue in the pie. In Woonsocket’s case, it seems pretty clear they needed both the residential and commercial taxes to have sufficient revenues. ↩︎

My analysis on Nesi’s Notes depended entirely on the National Center for Education Statistics’ Common Core Data. The per pupil amounts reported to NCES may look a bit different from state sources of this information. There are several explanations of this. First, the enrollment counts used to generate per pupil amounts are based on an October 1st headcount. In Rhode Island, we use something called “average daily membership” (ADM) as the denominator and not a headcount. The ADM of a district is calculated by taking all the students who attended the district at any point in the year and adding up the number of school days they were enrolled for. The total membership (i.e. all the student*days, for those who like to think about this in units) is divided by the number of school days per year, almost always 180 (so student*days / days/year = students/year). Additionally, NCES does not record the final three digits on most financial data. These rounding issues will also make the per pupil data seem different from state reports.
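As a quick, made-up illustration of that arithmetic (the enrollment spans below are hypothetical):

# Three hypothetical students in a 180-day school year: one enrolled all
# year, one for half the year, and one for a quarter of it.
days_enrolled <- c(180, 90, 45)
adm <- sum(days_enrolled) / 180
adm  # 1.75 students in average daily membership, versus a headcount of 3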

I wanted to use the NCES to make sure that the data in my post was easily reproducible by any member of the public. I also thought using NCES would serve as a great learning opportunity for the wonks and nerds out there who never even realized how much rich data about schools and school finance are available through the federal government. That being said, I do believe that the state reported numbers are far more accurate than those available from the federal government. That is not to say that the federal data is bad. On the contrary, that data is substantially vetted and validated and is very useful for research. My concern was only that some of the tiny differences in the NCES data that deviated from what I would consider to be ideal data might reach the level where they affected the validity of the conclusions I wanted to draw.

Although I was writing as a private citizen without the support of the Rhode Island Department of Education, I did use my access to RIDE data to ensure that differences in the federal reports were not significant enough to call into question my analysis. I found that both the direction and magnitude of all the trends that I describe in the Nesi’s Notes post held up with the state data. While all of that information is publicly available, it is less easily accessible than NCES data and doesn’t provide the same opportunity for analysis outside of financial data. For these reasons, I decided to stick with NCES.

So how do you reproduce the data I used?

First, go to the NCES Common Core Data Build a Table site. On the drop down, select “District” as the row variable and select the last fifteen years excluding 2009-10 (since there is no current financial data available for that year).


After clicking next, hit “I Agree” on the pop-up.


Now select “Finance Per Pupil Ratios” for the first column.


Click the green arrow that selects all years for local sources per student and state sources per student.


Click “Next>>” on the top right. Now select only RI-Rhode Island for your row variable.


Finally, click view table to see the results. I recommend downloading the Text (.csv) file to work with.


And finally, here’s the R code to reshape/rejigger the data I used and produce the graphics from the Nesi’s Notes post.

## Using NCES data to analyze education finances to Woonsocket over 15 years.
## Initialize required packages
require(plyr)
require(reshape2)
require(ggplot2)
require(scales)
## Best to ignore this function-- it's mostly magic to me too. Essentially,
## multiplot takes in a bunch of plots and then puts them into one image
## arranging them by columns equal to a parameter cols. Credit to:
## http://wiki.stdout.org/rcookbook/Graphs/Multiple%20graphs%20on%20one%20page%20(ggplot2)/
multiplot <- function(..., plotlist=NULL, cols) {
  require(grid)
  # Make a list from the ... arguments and plotlist
  plots <- c(list(...), plotlist)
  numPlots = length(plots)
  # Make the panel
  plotCols = cols                          # Number of columns of plots
  plotRows = ceiling(numPlots/plotCols)    # Number of rows needed, calculated from # of cols
  # Set up the page
  grid.newpage()
  pushViewport(viewport(layout = grid.layout(plotRows, plotCols)))
  vplayout <- function(x, y)
    viewport(layout.pos.row = x, layout.pos.col = y)
  # Make each plot, in the correct location
  for (i in 1:numPlots) {
    curRow = ceiling(i/plotCols)
    curCol = (i-1) %% plotCols + 1
    print(plots[[i]], vp = vplayout(curRow, curCol ))
  }
}
## Load data from the modified CSV. I made the following changes from the NCES
## downloaded file: 1) I removed all of the description header so that row one
## of the CSV is the attribute names; 2) I pasted the transposed state values
## to the final observation so that I have a state observation row analogous to
## the other LEA rows.

raw_data <- read.csv('rawdata.csv')
## Change name of first column to make things easier for later.
names(raw_data)[1] <- c('distname')
## Creating Time Series Data for each community of interest.
## I'm going to use a custom function to automate the steps required to create
## district level data in a time series.

create_ts <- function(name){
  # First create a column vector with the local funding
  # A few things to note: First, t() is the transpose function and helps to
  # make my "wide" data (lots of columns) "long" (lots of rows). Second, R
  # has a funny behavior that is very convenient for data analysts. It performs
  # many common mathematical operations element-wise, so the simple division
  # of two vectors below actually divides element by element through the
  # vector, e.g. column 17 is divided by column 2 to provide the first element
  # in the resulting vector. This makes calculating per pupil amounts very
  # convenient.
  local <- t(subset(raw_data,distname==name)[,c(17:31)]/
             subset(raw_data,distname==name)[,c(2:16)])
  # Performing the same operation for state per pupil amounts.
  state <- t(subset(raw_data,distname==name)[,c(32:46)]/
             subset(raw_data,distname==name)[,c(2:16)])
  # Putting state and local data together and getting rid of the nasty
  # attribute names from NCES by just naming the rows with a sequence
  # of integers.
  results <- data.frame(local,state,row.names=seq(1,15,1))
  # Naming my two attributes
  names(results) <- c('local','state')
  # Generating the year attribute
  results[['year']] <- seq(1995, 2009, 1)
  # This command is a bit funky, but basically it makes my data as long as
  # possible so that each line has an ID (year in this case) and one value
  # (the dollars in this case). I also have a label that describes that value,
  # which is local or state.
  results <- melt(results, id.vars='year')
  # Returning my "results" object
  results
}

## Create the Woonsocket data-- note that R is case sensitive so I must use all
## capitals to match the NCES convention.
woonsocket <- create_ts('WOONSOCKET')
pawtucket <- create_ts('PAWTUCKET')
providence <- create_ts('PROVIDENCE')
westwarwick <- create_ts('WEST WARWICK')
state <- create_ts('STATE')

## Developing a plot of JUST local revenues for the selected communities
## First I create a percentage change data frame. I think that looking at
## percent change overtime is generally more fair. While the nominal dollar
## changes are revealing, my analysis is drawing attention to the trend rather
## than the initial values.

## First, I pull out just the local dollars.
perwoonlocal <- subset(woonsocket,variable=='local')
## Now I modify the value to be divided by the starting value - 100%
perwoonlocal[['value']] <- with(perwoonlocal, (value/value[1])-1)
## A little renaming for the combining step later
names(perwoonlocal) <-c('year','disname','value')
perwoonlocal[['disname']]<-'Woonsocket'

## I repeat this procedure for all the districts of interest.
perpawlocal <- subset(pawtucket,variable=='local')
perpawlocal[['value']] <- with(perpawlocal, (value/value[1])-1)
names(perpawlocal) <-c('year','disname','value')
perpawlocal[['disname']]<-'Pawtucket'

perprolocal <- subset(providence,variable=='local')
perprolocal[['value']] <- with(perprolocal, (value/value[1])-1)
names(perprolocal) <-c('year','disname','value')
perprolocal[['disname']]<-'Providence'

perwwlocal <- subset(westwarwick, variable=='local')
perwwlocal[['value']] <- with(perwwlocal, (value/value[1])-1)
names(perwwlocal) <-c('year','disname','value')
perwwlocal[['disname']]<-'West Warwick'

perrilocal <- subset(state,variable=='local')
perrilocal[['value']] <- with(perrilocal, (value/value[1])-1)
names(perrilocal) <-c('year','disname','value')
perrilocal[['disname']]<-'State Average'

## The same process can be used for state data
perwoonstate <- subset(woonsocket,variable=='state')
## Now I modify the value to be divided by the starting value - 100%
perwoonstate[['value']] <- with(perwoonstate, (value/value[1])-1)
## A little renaming for the combining step later
names(perwoonstate) <-c('year','disname','value')
perwoonstate[['disname']]<-'Woonsocket'

## I repeat this procedure for all the districts of interest.
perpawstate <- subset(pawtucket,variable=='state')
perpawstate[['value']] <- with(perpawstate, (value/value[1])-1)
names(perpawstate) <-c('year','disname','value')
perpawstate[['disname']]<-'Pawtucket'

perprostate <- subset(providence,variable=='state')
perprostate[['value']] <- with(perprostate, (value/value[1])-1)
names(perprostate) <-c('year','disname','value')
perprostate[['disname']]<-'Providence'

perwwstate <- subset(westwarwick, variable=='state')
perwwstate[['value']] <- with(perwwstate, (value/value[1])-1)
names(perwwstate) <-c('year','disname','value')
perwwstate[['disname']]<-'West Warwick'

perristate <- subset(state,variable=='state')
perristate[['value']] <- with(perristate, (value/value[1])-1)
names(perristate) <-c('year','disname','value')
perristate[['disname']]<-'State Average'

## Pull together the data sets for the overall picture.
localfunding <- rbind(perwoonlocal, perpawlocal,perprolocal,perwwlocal,perrilocal)
statefunding <- rbind(perwoonstate, perpawstate,perprostate,perwwstate,perristate)

## A little ggplot2 line plot magic...
localperplot <- ggplot(localfunding,aes(year, value, color=disname)) +
                geom_line() +
                geom_text(data=subset(localfunding, year==2009),
                          mapping=aes(year,value,
                                      label=paste(100*round(value,3),'%',sep='')),
                          vjust=-.4) +
                scale_y_continuous('Percent Change from FY1995',
                                   label=percent) +
                scale_x_continuous('Year') +
                opts(title='Percent Change in Local Per Pupil Revenue, FY1995-FY2009') +
                opts(plot.title=theme_text(size=16,face='bold')) +
                opts(legend.title=theme_blank()) +
                opts(legend.position=c(.08,.82))
stateperplot <- ggplot(statefunding,aes(year, value, color=disname)) +
                geom_line() +
                geom_text(data=subset(statefunding, year==2008 | year==2009),
                          mapping=aes(year,value,
                          label=paste(100*round(value,3),'%',sep='')),
                          vjust=-.4) +
                scale_y_continuous('Percent Change from FY1995',
                                   label=percent) +
                scale_x_continuous('Year') +
                opts(title='Percent Change in State Per Pupil Revenue, FY1995-FY2009') +
                opts(plot.title=theme_text(size=16,face='bold')) +
                opts(legend.title=theme_blank()) +
                opts(legend.position=c(.08,.82))
ggsave('localperplot.png',localperplot,width=10,height=8,units='in',dpi=72)
ggsave('stateperplot.png',stateperplot,width=10,height=8,units='in',dpi=72)
    
## Proportion of Aid
proportion <- function(data){
  # This reshapes the data so that there is a year, local, and state column.
  # The mean function has no real effect, because the data are unique for each
  # year and variable combination.
  prop <- dcast(data,year~variable,mean)
  # Adding local and state get our total non-federal dollars
  prop[['total']] <- apply(prop[,2:3],1,sum)
  prop[['perlocal']] <- with(prop, local/total)
  prop
}


## Prepare new data frames for proportion graphs

propwoon <- as.data.frame(c(disname='Woonsocket',
                            proportion(woonsocket)))
proppaw <- as.data.frame(c(disname='Pawtucket',
                           proportion(pawtucket)))
propprov <- as.data.frame(c(disname='Providence',
                            proportion(providence)))
propww <- as.data.frame(c(disname='West Warwick',
                          proportion(westwarwick)))
propri <- as.data.frame(c(disname='State Average',
                          proportion(state)))

## Note, I could have called proportion() inside of the rbind(), but I wanted
## my code to be clearer and felt there may be some use for the independent
## proportion data frames in further analysis. Sometimes more lines of code
## and more objects is easier to maintain and more flexible for exploratory,
## non-production code. This is especially true when handling such small
## data sets that there is no impact on performance.

locprop <- rbind(propwoon, proppaw,propprov,propww,propri)

## Some ggplot2 magic time!

localpropplot <- ggplot(locprop,aes(year, perlocal, color=disname)) +
  geom_line() +
  geom_text(data=subset(locprop, year==1995 | year==2008 | year==2009),
            mapping=aes(year,perlocal,
                        label=paste(100*round(perlocal,3),'%',sep='')),
            vjust=-.4) +
  scale_y_continuous('Percent Change from FY1995',
                     label=percent) +
  scale_x_continuous('Year') +
  opts(title='Percent Change in Local Proportion of Per Pupil Revenue\n Excluding Federal Funding, FY1995-FY2009') +
  opts(plot.title=theme_text(size=16,face='bold')) +
  opts(legend.title=theme_blank()) +
  opts(legend.position=c(.9,.65))
ggsave('localpropplot.png',localpropplot,width=10,height=8,units='in',dpi=72)

This post is the third of a three-part follow up on my guest post for Nesi’s Notes. Parts I and II can be found here.

July 9, 2012

Update

See below for more information now that Ethan Brown has weighed in with some great code.

A recent post I came across on r-bloggers asked for input on visualizing ranked Likert-scale data.

I happen to be working on a substantial project using very similarly structured data so I thought I would share some code. In my efforts to be as generic as possible, I decided to generate some fake data from scratch. As I peeled away the layers of context-specific aspects of my nearing-production level code, I ran into all kinds of trouble. So I apologize for the somewhat sloppy and unfinished code1.

[Figure: net stacked distribution of ranked Likert-scale responses]

My preferred method for visualizing Likert-scale data from surveys is using net stacked distribution graphs. There are two major benefits of these kinds of graphs. First, they immediately draw attention to how strongly respondents feel about a question, particularly when multiple questions are visualized at once. The total width of any bar is equal to the total number of respondents who had a non-neutral answer. Second, these graphs make it very easy to distinguish between positive and negative responses. In some cases, it is critical to view the distribution of data to visualize the differences in responses to one question or another. However, most of the time it is informative enough to simply know how positive or negative responses are. I find this is particularly true with 3, 4, and 5-point Likert scales, the most common I come across in education research.

Anyway, without further ado, some starter code for producing net stacked distribution graphs.

require(ggplot2)
require(scales)
require(plyr)
dataSet <- data.frame(
  q1=as.ordered(round(runif(1500, 1, 15) + runif(1500,1,15))),
  q2=as.ordered(round(runif(1500, 1, 15) + runif(1500,1,15))))
dataSet[['q1']] <- with(dataSet, ifelse(q1<7,1,
                                       ifelse(q1>=7 & q1<13,2,
                                             ifelse(q1>=13 & q1<20,3,
                                                   ifelse(q1>=20 & q1<26,4,5)))))
dataSet[['q2']] <- with(dataSet, ifelse(q2<3,1,
                                       ifelse(q2>=3 & q2<14,2,
                                             ifelse(q2>=14 & q2<26,3,
                                                   ifelse(q2>=26 & q2<28,4,5)))))
dataSet[['q1']] <- as.ordered(dataSet[['q1']])
dataSet[['q2']] <- as.ordered(dataSet[['q2']])
levels(dataSet[['q1']]) <- c('Strongly Disagree',
                             'Disagree',
                             'Neither Agree or Disagree',
                             'Agree',
                             'Strongly Agree')
levels(dataSet[['q2']]) <- c('Strongly Disagree',
                             'Disagree',
                             'Neither Agree or Disagree',
                             'Agree',
                             'Strongly Agree')
# Convert the integer levels to have meaning.
q1Proportions <- data.frame(Name='q1', prop.table(table(dataSet[['q1']])))
q2Proportions <- data.frame(Name='q2', prop.table(table(dataSet[['q2']])))
# Produces a data frame with the proportions of respondents in each level.

# ggplot2 function for graphs
visualize <- function(data,
                      responses=c('Strongly Disagree',
                                  'Disagree',
                                  'Neither Agree or Disagree',
                                  'Agree',
                                  'Strongly Agree'),
                      desc='Title',
                      rm.neutral=TRUE){
  # This function will create net stacked distribution graphs. These are
  # a particularly useful visualization of Likert data when there is a neutral
  # option available and/or when emphasizing the difference between positive and
  # negative responses is a goal.
  # Args:
  #   data: This is a dataframe with percentages labeled with responses.
  #   responses: This is a vector with the response labels.
  #   desc: This is the title of the output ggplot2 graphic, typically the
  #         question text.
  #   rm.neutral: This is a single element logical vector that determines if the
  #               neutral response should be removed from the data. The default
  #               value is TRUE.
  for(i in 1:(ceiling(length(responses)/2)-1)){
      # This loop negates all the negative, non-neutral responses regardless of
      # the number of possible responses. This will center the non-neutral
      # responses around 0.
      data[i,3] <- -data[i,3]
  }
  if(rm.neutral==T){
    data <- ddply(data,.(Name), function(x) x[-(ceiling(length(responses)/2)),])
    responses <- responses[-(ceiling(length(responses)/2))]
  }
  else{

  }
  print(data)
  stackedchart <- ggplot() +
                  layer(data=data[1:2,],
                        mapping=aes(Name,Freq,fill=Var1,order=-as.numeric(Var1)),
                        geom='bar',
                        position='stack',
                        stat='identity')
  stackedchart <- stackedchart +
                  layer(data=data[3:4,],
                        mapping=aes(Name,Freq,fill=Var1,order=Var1),
                        geom='bar',
                        position='stack',
                        stat='identity')
  stackedchart <- stackedchart +
                  geom_hline(yintercept=0) +
                  opts(legend.title=theme_blank()) +
                  opts(axis.title.x=theme_blank()) +
                  opts(axis.title.y=theme_blank()) +
                  opts(title=desc) +
                  scale_y_continuous(labels=percent,
                                     limits=c(-1,1),
                                     breaks=seq(-1,1,.2)) +
                  scale_fill_manual(limits=responses,
                                    values=c('#AA1111',
                                             '#BB6666',
                                             '#66BB66',
                                             '#11AA11')) +
                  coord_flip()
  stackedchart
}

And the results of all that?

UPDATE:

So now that Ethan has weighed in with his code I thought I would add some things to make this post better reflect my production code. Below, I have included my comment on his blog as well as an actual copy of my current production code (which definitely is not sufficiently refactored for easy use across multiple projects). Again, excuse what I consider to be incomplete work on my part. I do intend to refactor this code and eventually include it in my broader set of custom functions available across all of my projects. I suspect along that path that I will be “stealing” some of Ethan’s ideas.

Comment

Hi Ethan! Super excited to see this post. This is exactly why I put up my code– so others could run with it. There are a few things that you do here that I really like, some of which I actually had already implemented in my code and then removed in an attempt to be more neutral to scale.

For starters, in my actual production code I also separate out the positive and negative responses. In my code, I have a parameter called scaleName that allows me to switch between all of the scales that are available in my survey data. This includes Strongly Disagree to Strongly Agree (scaleName=='sdsa'), Never -> Always (scaleName=='neal'), and even simple yes/no (scaleName=='ny'). This is not ideal because it does require 1) knowing all possible scales and including some work in the function to treat them differently and 2) including an additional parameter. However, because I use this work to analyze just a few surveys, the upfront work of including this as a parameter has made this very flexible in dealing with multiple scales. As a result, I do not need to require that the columns are ordered in any particular way, just that the titles match existing scales. So I have a long set of if/else if statements that look something like this:

 if(scaleName=='sdsa'){
 scale <- c('Strongly Disagree','Disagree','Agree','Strongly Agree')
 pos <- c('Agree','Strongly Agree')
 neg <- c('Strongly Disagree','Disagree')
 }

This is actually really helpful for producing negative values and including some scales in my function which do not have values that are negative (so that it can be used for general stacked charts instead of just net-stacked):

if(length(neg)>0){
  quest[,names(quest) %in% c(neg)] <- -(quest[,names(quest) %in% c(neg)])
}

(Recall that quest is what I call the dataframe and is equivalent to x in your code)

Another neat trick that I have instituted is having dynamic x-axis limits rather than always going from -100 to 100. I generally like to keep my scales representing the full logical range of data (0 - 100 for percentages, etc) so I might consider this a manipulation. However, after getting many charts with stubby centers, I found I was not really seeing sufficient variation by sticking to my -100 to 100 setup. So I added this:

pos_lims <- max(c(sum(subset(quest,select=c(which(quest[1,-1]>=0)+1))[1,]),
                  sum(subset(quest,select=c(which(quest[2,-1]>=0)+1))[2,])))
neg_lims <- max(abs(c(sum(subset(quest,select=c(which(quest[1,-1]<=0)+1))[1,]),
                      sum(subset(quest,select=c(which(quest[2,-1]<=0)+1))[2,]))))
x_axis_lims <- max(pos_lims,neg_lims)

Which helps to determine the value furthest from 0 in either direction across the data frame (I have to admit, this code looks a bit like magic reading it back). My comments actually are quite helpful:

# pos_lims and neg_lims subset each row of the data based on sign, then
# sums the values that remain (getting the total positive or negative
# percentage for each row). Then, the max of the rows is saved as a candidate
# for the magnitude of the axis.

To make this more generalizable (my production code always compares two bars at once), it would be fairly trivial to loop over all the rows (or use the apply functions, which I’m still trying to get the hang of).
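For what it’s worth, here is a rough sketch of that generalization, assuming the same layout as above (a label in the first column of quest and only percentage columns after it):

# Sketch only: for every row, total the positive values and the (absolute)
# negative values, then keep the largest magnitude as the axis limit.
row_extent <- apply(quest[,-1], 1,
                    function(r) max(sum(r[r >= 0]), abs(sum(r[r <= 0]))))
x_axis_lims <- max(row_extent)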

I then pad the x_limits value by some percent inside the limits attribute.

In my production code I also have the scale_fill_manual attribute added separately to the ggplot object. However, rather than adding this after the fact at the point of rendering, I include it in my function, again set by scaleName. That said, I think the best organization is probably to have a separate function that makes it easy to select the color scheme you want and apply it, so that your final call could be something like colorNetStacked(net_stacked(x), 'blues').
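A minimal sketch of what that helper might look like (the function name comes from the comment above, net_stacked is Ethan’s function, and the palette handling is just one way to do it, assuming RColorBrewer and ggplot2 are loaded):

# Hypothetical helper: apply a Brewer palette and the final scale/flip to a
# net-stacked chart built elsewhere.
colorNetStacked <- function(chart, scale, palette='Blues'){
  n <- length(scale)
  # Skip the lightest shades, as in the production code below.
  colors <- brewer.pal(name=palette, n=n+2)[3:(n+2)]
  chart + scale_fill_manual(limits=scale, values=colors) + coord_flip()
}

So a final call might look like colorNetStacked(net_stacked(x), scale=c('Strongly Disagree','Disagree','Agree','Strongly Agree')).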

My actual final return looks like this:

return(stackedchart + 
       scale_fill_manual(limits=scale, values=colors) + 
       coord_flip())

Where colors is set by a line like: colors <- brewer.pal(name='Blues',n=7)[3:7]

Seriously though, I am super excited you found my post and thought it was useful and improved what I presented!

Current production code:

## The production code below relies on ggplot2 (plots), reshape2 (melt),
## scales (percent), and RColorBrewer (brewer.pal), so load them first.
require(ggplot2)
require(reshape2)
require(scales)
require(RColorBrewer)

visualize <- function(quest,scaleName='sdsa',desc){
  # Produces the main net-stacked Likert graphic used for survey data in the
  # diagnostic tool
  # Args:
  #  quest: data.frame from pull() or pullByLevel() output
  #  scaleName: string code for the type of scale that is used for the question.
  #  desc: string for the title that will be displayed on the graphic.
  # Returns:
  # Net-Stacked Likert chart with a bar/row for each row in quest. Most scales
  # center around 0 with a distinct positive and negative set of responses.
  # The graphs are custom colored based on what best reflects the scale.
  # The x-axis limits are set dynamically based on a 10% buffer of the largest
  # magnitude to either the positive or negative responses.

  if(scaleName=='sdsa'){
    scale <- c('Strongly Disagree','Disagree','Agree','Strongly Agree')
    pos   <- c('Agree','Strongly Agree')
    neg   <- c('Strongly Disagree','Disagree')
  }else if(scaleName=='da'){
    scale <- c('Disagree','Agree')
    pos   <- c('Agree')
    neg   <- c('Disagree')
  }else if(scaleName=='neal'){
    scale <- c('Never','Sometimes','Usually','Always')
    pos   <- c('Usually','Always')
    neg   <- c('Never','Sometimes')
  }else if(scaleName=='noalot'){
    scale <- c('None','A Little','Some','A Lot')
    pos   <- c('None','A Little','Some','A Lot')
    neg   <- c()
  }else if(scaleName=='noall'){
    scale <- c('None of them','Some','Most','All of them')
    pos   <- c('None of them','Some','Most','All of them')
    neg   <- c()
  }else if(scaleName=='neda'){
    scale <- c('Never','A Few Times a Year','Monthly','Weekly','Daily')
    pos   <- c('Never','A Few Times a Year','Monthly','Weekly','Daily')
    neg   <- c()
  }else if(scaleName=='ny'){
    scale <- c('No','Yes')
    pos   <- c('Yes')
    neg   <- c('No')
  }else{
    print('Unrecognized Scale Name')
  }
  # Remove neutral and non-response based values in the pull tables like
  # n-size, Not Applicable, etc.
  quest <- quest[,!names(quest) %in%
    c('n','Not Applicable',"I don't know")]

  # Produce values less than 0 for negative responses
  if(length(neg)>0){
  quest[,names(quest) %in% c(neg)] <-
    -(quest[,names(quest) %in% c(neg)])
  # pos_lims and neg_lims subset each row of the data based on sign, then
  # sums the values that remain (getting the total positive or negative
  # percentage for each row). Then, the max of the rows is saved as a candidate
  # for the magnitude of the axis.
  pos_lims <- max(c(sum(subset(quest,select=c(which(quest[1,-1]>=0)+1))[1,]),
                    sum(subset(quest,select=c(which(quest[2,-1]>=0)+1))[2,])))
  neg_lims <- max(abs(c(sum(subset(quest,select=c(which(quest[1,-1]<=0)+1))[1,]),
                        sum(subset(quest,select=c(which(quest[2,-1]<=0)+1))[2,]))))

  # The actual magnitude of the axis is the largest magnitude listed in pos_lims
  # or neg_lims, and will be inflated by .1 in each direction in the scale later
  x_axis_lims <- max(pos_lims,neg_lims)
  }else{

  }
  # Reshape the data so that each row has one value with a variable label.
  quest <- melt(quest,id.vars='Var1')

  # Factoring and ordering the response label ensures they are listed in the
  # proper order in the legend and on the stacked chart, i.e. strongly disagree
  # is furthest left and strongly agree is furthest right.
  quest[['variable']] <- factor(quest[['variable']],
                                levels=scale,
                                ordered=TRUE)

  # Build the plot using ggplot(). Layers are used so that positive and negative
  # can be drawn separately. This is important because the order of the negative
  # values needs to be switched.

  ##### Control flow required to change the behavior for the questions that
  ##### business requirements call for 0-100 scale with no indication of
  ##### positive or negative, i.e. the neda, noalot, and noall scaleName.
  stackedchart <- ggplot() +
    layer(data=subset(quest,
                      variable %in% pos),
          mapping=aes(Var1,
                      value,
                      fill=factor(variable)),
          geom='bar',
          stat='identity',
          position='stack') +
    geom_hline(yintercept=0) +
    opts(legend.title=theme_blank()) +
    opts(axis.title.x=theme_blank()) +
    opts(axis.title.y=theme_blank()) +
    opts(title=desc)
  if(length(neg)>0){
    stackedchart <- stackedchart +
      layer(data=subset(quest,
                        variable %in% neg),
            mapping=aes(Var1,
                        value,
                        fill=factor(variable),
                        order=-as.numeric(variable)),
            geom='bar',
            stat='identity',
            position='stack')
  }else{

  }
  if(scaleName %in% c('sdsa','neal')){
    colors <- c('#AA1111','#BB6666','#66BB66','#11AA11')
    stackedchart <-  stackedchart +
      scale_y_continuous(labels=percent,
                         limits=c(-x_axis_lims-.1, x_axis_lims+.1),
                         breaks=seq(-round(x_axis_lims,1)-.1,
                                    round(x_axis_lims,1)+.1,
                                    .2))
  }else if(scaleName %in% c('ny','da')){
    colors <- c('#BB6666','#66BB66')
    stackedchart <-  stackedchart +
      scale_y_continuous(labels=percent,
                         limits=c(-x_axis_lims-.1, x_axis_lims+.1),
                         breaks=seq(-round(x_axis_lims,1)-.1,
                                    round(x_axis_lims,1)+.1,
                                    .2))
  }else if(scaleName %in% c('noalot','noall')){
    colors <- brewer.pal(name='Blues',n=6)[3:6]
    stackedchart <-  stackedchart +
      scale_y_continuous(labels=percent,
                         limits=c(0,1.05),
                         breaks=seq(0,1,.1))
  }else if(scaleName %in% c('neda')){
    colors <- brewer.pal(name='Blues',n=7)[3:7]
    stackedchart <-  stackedchart +
      scale_y_continuous(labels=percent,
                         limits=c(0,1.05),
                         breaks=seq(0,1,.1))
  }else{
    print('Unrecognized scaleName')
  }
  return(stackedchart + scale_fill_manual(limits=scale,
                                          values=colors) +
                        coord_flip())
}

  1. Mainly, I would like to abstract this code further. I am only about halfway there to assuring that I can use Likert-scale data of any size. I also would like to take in more than one question simultaneously with the visualize function. The latter is already possible in my production code and is particularly high impact for these kinds of graphics ↩︎

This poignant post from Michael Goldstein ends with a few policy thoughts that largely support my previous post.

Goldstein’s second point is worth highlighting:

Anyway, in a small school, large-scale research isn’t the key determinant anyway. The team’s implementation is.

On the same day that Shanker Blog is assuring us that rigorous social science is worth it, Goldstein delivers researchers a healthy dose of humility. Rigorous research is all about doing the best we can to remove all the confounding explanatory factors that have an impact on our observed outcomes to isolate an intervention. But even in the most rigorous studies social scientists are often measuring the Average Treatment Effect.

How rarely do we truly encounter a completely average situation? The real impact in any particular school or organization can be dramatically different in magnitude and even direction because of all the pesky observed and unobserved confounding factors that researchers work so hard to be able to ignore.
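A toy illustration of why the average can hide so much (the numbers are invented):

# 100 hypothetical schools: the intervention helps 70 of them and hurts 30.
effects <- c(rep(4, 70), rep(-6, 30))
mean(effects)   # average treatment effect of +1, even though 30 schools lose ground
range(effects)  # school-level effects run from -6 to +4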

So my advice? If you are down on the ground, keep close to the research but keep closer to your intuition, provided you are ready, willing, and able to monitor, evaluate, and adjust.

July 8, 2012

How can we tell if principal directives are fair to teachers?

There has been a great conversation circling some blogs I read over the last week about liberty in the workplace.1 Issues of fairness in the workplace are a constant in today’s education conversation. Some view it as a form of metaphoric violence against teachers and their profession, while others see a concerted effort to change rigid, bureaucratic systems that prevent effective change2; either way, at the heart of education reform du jour are changes to workplace freedom. Improving human capital systems has meant dismantling questionable licensing requirements3, dramatic changes in teacher evaluation, and other sweeping changes to who gets hired or fired. Using extended learning time, either through additional instructional days and/or longer school days, to increase student achievement is often considered too costly, because teachers demand more pay for more work. Additional professional development days are similarly costly; teachers are loath to give up additional days in the summer or during school vacations without receiving additional pay. I could go on.

All of these reforms seek to radically change terms and benefits in teacher contracts and state law that represent a string of hard-fought (and won) battles that teachers and their unions pursued for years. The political left, and more specifically the progressive movement, has generally picked up on these attempts as anti-union, anti-collective bargaining, anti-democratic, anti-teacher, and anti-education. There are even a host of conspiracy theories decrying the “corporate reformers” who are coming into the education realm to break down good, public, democratic systems that are good for Democrats, largely to hurt poor kids and make profits4.

Fundamentally, most of this argument is about what individuals’ ideologies have led them to believe about employee rights and employer rights. I find it increasingly frustrating that these conversations do not address the deeper philosophical differences. This is why I have really enjoyed observing the current conversation between Crooked Timber, Bleeding Heart Libertarians, and others.

One of the key aspects of the BRG argument5 is that worker contracts are unique because many of the terms of employment are ambiguous. Employers should only be permitted to demand that employees partake in activities to which they have consented. The contract is supposedly a signal of this consent; however, because the terms are so often ambiguous, disputes over whether or not the contract covers an activity are practically a guarantee. So how should these disputes be settled, and by whom? BRG would argue that there should be strong worker freedom to make sure that consent is truly given. They consider the relationship between employer and employee to be naturally coercive, at least in part because they assume the right to end employment has very asymmetric benefits: employees, presumably, have much more to lose than employers when the contract ends. BRG assumes that freedom is best served through a democratic workplace with very powerful employees who have few, if any, of their rights restricted in the workplace. On the other hand, BHLers believe that it is possible to consent to restricting one’s rights within a contractual relationship; they do not tend to accept that the right to exit affords highly asymmetric freedoms6; and they feel that freedom is maximized by abstaining from limiting private contracts while maximizing the rights to freely enter and exit them.

However, it is macroeconomist Miles Kimball, a recent entrant into the blogosphere, whose comments I felt could most directly be applied to education. If I were to summarize his post, it would be:

  1. There are significant pressures against eliminating freedoms of your workers that end up making them worse at their jobs or that lead to attracting bad talent.
  2. Although these pressures exist for “The Firm”, it is true that “underbosses” with significant power can act in ways that maximize their personal gain instead of what’s good for “The Firm”, and the pressures against their eliminating freedoms are weaker than those on the organization as a whole.
  3. Nevertheless, they should have the right to limit/remove freedoms, and these limitations should be based on whether they are relevant to achieving the organization’s pre-stated mission.
  4. Ultimately, the right outside force to judge whether an imposition on employees was proper should be people who have successfully navigated the same challenges as The Firm but have no direct interest in The Firm’s current activities.

Each of these four points, if they are accepted as true, has some interesting applications to education. My translation for education colleagues would be:

  1. Districts and states have little reason to make lives shitty for teachers. 7

  2. But some principals, department heads, and others may have the ability to act in ways that are less than proper. 8

  3. Actions that restrict teacher rights should be judged on whether they help the school achieve the district or school’s pre-stated mission.

  4. Disputes between teachers and their bosses should not be adjudicated by a typical jury or judge. Instead, the actions of the principal should be judged by other principals who have been successful, preferably with some distance from the actual organization (i.e., not principals who might compete for the offending principal’s job or who may want to hire or be stuck with that teacher based on the proceedings).

I think that points 1 and 2 are fairly obvious. Points 3 and 4, however, are far more interesting.

Kimball is attempting to split the difference in a fascinating way. I believe he would accept that employment contracts are, by necessity, “ambiguous” in the way that BRG defines that term. His argument is, therefore, that the mission and purpose of the organization should not be ambiguous. So long as the organization’s mission is clear, an employment contract becomes consent to do whatever has a rational basis for furthering those goals. In this way, there is an ethical standard by which we can judge new situations that could never have been anticipated directly at the contracting stage. For example, it may be perfectly reasonable for a principal to require a teacher to spend lunch in the student cafeteria so long as there is a rational basis for believing this would further the mission of the school.

In highly unionized workplaces, work rules are so specific that they remove a substantial portion of the ambiguity in contracting. This is generally seen by the left, union members, and other BRG-like thinkers as a huge victory. It implies full consent to the terms of employment and substantial restriction of an employer’s ability to abuse their position and abridge the freedoms of their employees in unethical ways. Schools are generally like this. Practically everything is spelled out about a teacher’s position, often to the minute. How long they get to eat lunch, how much unstructured time they get during the day, how long they have to spend working with other teachers, how long they are allowed to be placed in front of kids, how many kids can be placed in front of a teacher at any given time: these conditions and more are detailed in teacher contracts.

In my experience, when I ask a union supporter why they think unions are good, they almost always point out “abuses” by employers that occurred, often before the union wrested power from the grips of the few and the privileged back to the laborers. I have to wonder how much of their support comes from a lack of a common, clear definition of unethical abridgments of freedom in the workplace. The solution to this ambiguity is requiring that all actions be consented to through negotiation and contracting, which also means that dispute resolution becomes a matter of contract law. I have to wonder if both workers and their employers would be better off if there were a universal ethical standard like the one Kimball proposes. That way, consent can be given while allowing more ambiguity in the contract itself. Right now, employers who fight for this ambiguity can rely only on appeals to trust and cooperation, two things that are rarely earned before people have actually worked together, as would be required here.

I can’t say that I understand labor dispute resolution well enough to comment on the differences between Kimball’s fourth suggestion and current practice. However, it is pretty clear to me that enforcement through contract law is costly and inefficient, regardless of whether it is effective in adjudicating disputes in an ethical manner. Labor relations boards, as far as I can tell, seem to be a political tool, swaying between dramatically increasing worker power, especially when members are current or former full-time employees and members of a union, and increasing employer power when more corporate representation is assured. If only I believed it were possible to have the apolitical, disinterested board, with sector-specific expertise, that Kimball envisions determine whether there is a “rational basis” for employer actions.

I am left with more questions than answers, but, for me, there is a rich appeal to utilizing the mission of an organization to determine whether the actions of both it and its employees are just.


  1. Here is the (socialist?) critique of libertarians and right to work that sparked the discussion. Then two economists jumped in. The responses from Bleeding Heart Libertarians, meanwhile, continue to pour in rapidly. ↩︎

  2. I lean toward the latter, even if I disagree sometimes both with the means and ends of the current reform movement. ↩︎

  3. I am skeptical about licensing in general. I’ve seen substantially more research supporting experience than supporting licensing requirements or additional education. Various alternative teacher pathways now exist. ↩︎

  4. Whereas I find some of the “anti-s” in the previous sentence worthy of discussion, I find the massive, corporate, right-wing conspiracy stuff to be 98% bullocks. ↩︎

  5. BRG = Bertram, Robin, and Gourevitch, the authors of the Crooked Timber post. This acronym has been used in contrast with BHL, Bleeding Heart Libertarians, during this debate. ↩︎

  6. Or perhaps, if they do, they feel that this would not be the case in a world that more generally matched BHL principles. ↩︎

  7. Here I assume that the proper size unit for “The Firm” is above the school. I think this generally holds because district boards and state policy makers tend to be more directly accountable to citizens than schools. The relationship here is much more like shareholders and/or customers to business than the citizen to school relationship. ↩︎

  8. Here, the school-level administration represents the “underbosses”. Given that much of the debate over teacher evaluation and rules on hiring and firing stem from debates about both principal quality and principal power, I think this is the right assignment. ↩︎

June 12, 2012

Tonight I started Coursera.org’s Algorithms: Design and Analysis Part I. This class should pick up right about where I left off my computer science education. I took CS15 as a sophomore in college but didn’t have the time to take CS16: Introduction to Algorithms and Data Structures. So, while it’s been almost 6 years since I have formally taken a computer science class, it is time to continue my education.

I plan to write about once a week about my experience. This will serve both as an opportunity to work out ideas spurred by the course and as a review of the growing area of free, online courses that started way back in 2002 with MIT’s OpenCourseWare and continues today with upstarts Udacity and Coursera, among other players. Given the emphasis placed over the last 50 years on technology’s potential to disrupt classroom teaching, the topic seems worthy of some experiential learning by a budding young education researcher/wonk.

Introduction and About the Course

The Introduction video was a bit scary. Although the content was simple, Professor Tim Roughgarden is a fast talker, and he does seem to skip some of the small steps that really trip me up when learning math from lectures. For example, in discussing the first recursive method for $n$-digit multiplication, Professor Roughgarden suddenly throws in a $10^n$ and a $10^{n/2}$ term that I just couldn’t trace. I kept watching the video, waiting for an explanation and pondering it in my mind, when a few minutes later it hit me: the two terms keep the place information that is lost when a number is split into its constituent digits 1.
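To spell that out with a concrete example of my own (not one worked in the lecture): take $x = 1234$ and $y = 5678$, so $n = 4$ and the halves are $a = 12$, $b = 34$, $c = 56$, $d = 78$. Then $x \cdot y = 10^4 \cdot ac + 10^2 \cdot (ad + bc) + bd = 10^4 \cdot 672 + 10^2 \cdot 2840 + 2652 = 7006652$, which is indeed $1234 \times 5678$. The $10^4$ and $10^2$ factors are exactly what restore each half to its proper place value.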

The About this Course video, however, provided some good advice I intend to follow: although no code will be written as part of this course, to keep it language neutral, I will be attempting to code each of the described algorithms on my own. Professor Roughgarden’s assumption is that this is within the skills of students taking this class. Generally, I believe I am capable of achieving this in at least some language. Currently, I prefer to use R. This is not because R is best suited to this kind of work. Rather, it is because I am relatively new to R, and I think that learning to program some fundamental computational tasks will be good for learning the ins and outs of the language.
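As a first attempt at that, here is a minimal sketch in R of the simple recursive multiplication described above, the version whose $10^n$ and $10^{n/2}$ terms tripped me up. The function name rec_mult and its details are my own, not the course’s; it assumes non-negative integers small enough that double-precision arithmetic stays exact, and it uses the straightforward four recursive calls.

# rec_mult: my own sketch of the simple recursive integer multiplication.
# Assumes x and y are non-negative integers small enough that doubles stay exact.
rec_mult <- function(x, y) {
  # Base case: single-digit factors multiply directly
  if (x < 10 || y < 10) return(x * y)
  n <- max(nchar(as.character(x)), nchar(as.character(y)))
  half <- n %/% 2
  # Split each factor into high and low halves: x = 10^half * a + b
  a <- x %/% 10^half; b <- x %% 10^half
  c <- y %/% 10^half; d <- y %% 10^half
  # x * y = 10^(2 * half) * ac + 10^half * (ad + bc) + bd
  10^(2 * half) * rec_mult(a, c) +
    10^half * (rec_mult(a, d) + rec_mult(b, c)) +
    rec_mult(b, d)
}

rec_mult(1234, 5678)  # 7006652, the same as 1234 * 5678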

However, I think I may switch over to using Python later in the course. Why? Because I feel like learning Python, and Udacity happens to already have a course up to do just that. My hope is to incorporate free online learning into my routine just like I include reading dead-tree books, checking Google Reader, and messing around on Twitter. So while I can’t swear that I’ll actually start moving through these two courses (and two more I’m interested in starting June 25), I feel that having complementary, simultaneous course work will push me. Each class should reinforce the other, and I should see the most benefit if I keep up with both.

Finally, this class is a big time commitment. The first week has 3.5 hours of lecture time allotted. A typical Brown class would meet for only about 2.5 hours a week (three 50-minute classes or two 80-minute classes). That means a lot of time, not including homework or the time spent actually coding and implementing the introduced algorithms. Although some of this material is “optional” (about an hour), that’s still pretty intimidating for a free, online, spare-time class. Make no mistake: if time commitment is any indicator, this will be every bit as challenging (to actually learn) as a real college course that lasts this many weeks.


  1. The algorithm called for splitting an $n$-digit number $x$ into two $n/2$-digit numbers, $a$ and $b$. What was unstated, but of course true, is that this split must produce an expression equal to $x$. Of course, $x = 10^{n/2} \cdot a + b$, because shifting by $10^{n/2}$ puts the leading digit of $a$ back in the $n^{th}$ place while the leading digit of $b$ stays in the $\frac{n}{2}^{th}$ place; multiplying two numbers written this way is where the $10^n$ and $10^{n/2}$ terms come from. Nothing about this is complex to me, but it was not obvious at the speed of conversation. I think working out an actual example of a 4-digit multiplication, as Professor Roughgarden did with the “primitive” multiplication algorithm, would have made this far clearer. ↩︎

June 6, 2012

The removal of I-195 from the Jewelry District is supposed to help spur Providence’s second renaissance by providing ample green- and brown-field development sites for a whole host of biomedical companies apparently dying to move to a state and city in fiscal crisis whose current population does not have the required skills to serve as an employment base.

Seriously, I am quite optimistic about the once-in-a-generation opportunity to develop a massive part of what should be an integral piece of Providence’s downtown core 1. Buildings will come, even if it is a slow, grueling process. Hopefully jobs will follow. But a key first step the incredibly busy 2 I-195 commission must take is elevating Dyer Street into a new hub of activity.

Why Dyer Street

This is a particularly important site for the redevelopment of Providence. One of the highest-profile completed development projects in the Jewelry District has been Brown University’s Alpert Medical School at Ship Street and Eddy Street 3. Further down Eddy Street, we find one of the tragic failures of the Jewelry District, Narragansett Electric Lighting (Dynamo House), a hulking brick site left open to the elements that was at one point set to become a museum. One Davol Square, a popular site for entrepreneurs in Providence, is found where Eddy meets Point Street.

Brown University has already purchased 200 Dyer Street, which sits to the north at the “start” of Dyer Street between Clinton and Dorrance. This site was recently renovated and is now home to Brown University’s Continuing Education, an adult education program serving primarily mid-career professionals. Already, 200 Dyer hosts forums intended for the Providence community and, along with the expansion of CE into so-called “executive master’s programs”, this site is likely to be a hub of substantial interaction between Brown and Providence residents.

It is easy to see that Eddy Street, from Ship Street to Point Street, is already an important hub of job-related activity in the Jewelry District. The very presence of an existing, huge, historic site between Alpert Medical School and a major center for startups makes it likely that this stretch could see further real development. And with Brown staking a claim to the “mouth” of Dyer Street, the makings of a Brown University “West” campus 4 are coming into view.

Expanding Riverwalk Park into the space between Dorrance and Ship Street as planned should be the final piece to the Dyer Street puzzle 5.

It seems that turning Dyer Street into an “A” street filled with activity should be one of the easiest sells of all in the Jewelry District, given this is one of the few areas where actual purchases have taken place other than the land behind Johnson and Wales.

Luckily, as far as I can tell, Providence Planning’s vision for repairing the street grid in this area is right on the mark: while the land adjacent to Dyer Street from Friendship Street to Ship Street is some of the most “shovel-ready” land in the Jewelry District, this stretch also represents some of the land most obviously damaged by the highway 6. It is easily fixed. Dyer Street should be two-way all the way, not shifting to one-way at Peck Street. The remnants of an “on” ramp that serves as the northbound route connecting Ship Street and Peck Street should obviously be eliminated and subsumed in the expanded Riverwalk Park. An additional oddity left from another on ramp between Dorrance and Clifford Street should be removed, allowing the two-way Dyer to have a straighter path. Dyer should potentially be expanded to include bike lanes separated from traffic by trees on the eastern side.

(Image: Chapinero bike path)

Create a street like this. Encourage development on the west led by Brown University connecting Alpert Medical School to Brown Continuing Education. Bring in creative commercial development forming a continuous street wall of jobs from One Davol Square to the new, expanded park. Attach the proposed Greenway through the Jewelry District and the planned pedestrian bridge to Fox Point. Do all of this, and Dyer Street will become one of the most vibrant places in Providence.


  1. I am not being sarcastic here, even if I’m generally dismissive and flippant about the wacky ideas that the “elite” in Providence and the state of Rhode Island have about this space ↩︎

  2. Okay, so here I’m being sarcastic ↩︎

  3. Dyer turns into Eddy past Ship ↩︎

  4. I don’t think they use this term ↩︎

  5. Although it appears the I-195 commission is getting cold feet on the expanse of this public space ↩︎

  6. It will likely help to look at this view while reading this next section ↩︎

April 24, 2012

I am a firm believer that some goods should be [public][]. I do not believe that my tax dollars are about providing direct personal benefit. I like redistributive tax policy. But it is hard to be a Rhode Islander, surrounded by government institutions [that][] are [failing][], and feel good about the taxes I pay. Corruption and [cronyism][] are a daily reality of government business. Some agencies have [tremendous waste and inefficiency][]. Worse, many public institutions that are failing their missions and wasting money are actually woefully underfunded. 1

If more government institutions functioned like the Downtown Improvement District, there would be greater trust and support for government services.

Some of the best money I spend each year is the approximately $200-250 that I send to the [Downtown Improvement District][] (DID).

I live within a special assessment district in Providence that levies an additional property tax to pay for the ladies and gentlemen in bright yellow jackets who are a constant presence in my neighborhood. For a small tax each year, my neighborhood gets:

  • substantial cleanup/sanitation services that remove the mountains of trash that can pile up during a busy night when [college students][], [theater][] [goers][], [restaurant patrons][], nightclub patrons, [tourists][], and [Waterfire][] visitors all converge on Downcity
  • excellent landscaping including planting and pruning trees, maintaining flowerbeds, hanging flower pots on light posts, etc.
  • responsive care of public property, including the removal of safety hazards, e.g. the waist-high cement barrier that had fallen onto the sidewalk by my building and endangered pedestrians
  • an easily identifiable public presence, in addition to police, that increases the safety and security of busy city streets
  • much, much more.

It may seem selfish, but honestly, this is the best government service I currently receive. It is inexpensive. I am able to see a direct increase in my quality of life in Downcity. It clearly increases and protects my property investment. I get a fairly detailed budget mailed to me annually that explains precisely what my dollars purchased and how they will be used in the coming year.

When the currently dormant [Providence Core Connector][] announced it would seek to use a special assessment district to fund operating expenses, [I was all for it][]. Sure, a portion of my support came from the simple economics, but I would be lying if I said the wonderful relationship I have with the Downtown Improvement District was not a part of my consideration. The DID has provided an excellent model to Downcity residents, demonstrating the efficacy of using the greatest (but not sole) beneficiary of place-bound services as a revenue source. Does anyone really believe that Downtown would have doubled its residency from 2000-2010 2 if the DID were not around?

  [public]: http://en.wikipedia.org/wiki/Public_goods
  [that]: http://www.golocalprov.com/news/julia-steiny-woonsockets-nosedive-a-cautionary-tale/
  [failing]: http://www.golocalprov.com/news/procap/
  [cronyism]: http://www2.turnto10.com/news/2012/apr/06/former-uri-president-enters-sport-institute-scanda-ar-991425/
  [tremendous waste and inefficiency]: http://www.ripec.org/pdfs/2009-AI-Study.pdf
  [Downtown Improvement District]: http://downtownprovidence.com/clean-safe/
  [college students]: http://jwu.edu/providence/
  [theater]: http://www.ppacri.org/
  [goers]: http://www.trinityrep.com/
  [restaurant patrons]: http://graciesprovidence.com/
  [tourists]: http://www.hotelprovidence.com/
  [Waterfire]: http://www.waterfire.org/
  [let’s change that]: http://t.co/zmwvItBf
  [Providence Core Connector]: http://providencecoreconnector.com/
  [I was all for it]: http://blog.jasonpbecker.com/2011/09/26/downcity-residents-should-support-the-core-connector-and-the-tax-makes-sense/


  1. See Pawtucket and Woonsocket on this chart ↩︎

  2. US Census Bureau. Check out this great resource that the Providence Planning Department put up ↩︎