Jason Becker
March 19, 2012

Have you ever tried to access public information about Providence on the web? Due to the new requirement that residents reapply for their homestead tax exemption in Providence, I decided to poke around the Providence website to see what kind of public information on property was available online.

I was greeted with an IT nightmare1.

The system was clearly a third-party front end for searching public property records, developed or purchased and then sold to municipalities throughout the country by winning contracts through the RFP process. It’s also clearly no longer supported (or poorly supported), outdated, and running on hardware that’s probably about as powerful as an eight-year-old desktop computer. The system crawls when it isn’t stalled outright, and it provides no easy means of export that I can find.

This was really disappointing to me. In an earlier post, I began to look at some simple data available on the Providence Journal’s webpage about recent sales. I was hoping to use an API (or, in the worst case, a massive data dump) to get access to information about the assessed value of recently sold properties, play around a bit with various heat maps, and see what patterns are revealed. Even if there were some way to get access to the data, the application is so poor that I definitely do not have the patience required.

This is a real shame, because one of the great things about data on property is that it is all public information. This means the data could be shared widely with creative policy wonks, data geeks, and CS nerds looking for a weekend project. There are now two cities2 in the US that are employing a new kind of CIO– the Chief Innovation Officer– whose role is to connect government resources, be they employees, data, or infrastructure, with folks who can do something new, exciting, and useful for city residents.

Developers designed one application in San Francisco to reduce the “notoriously cumbersome hurdles for starting a new business.”3 Does anyone happen to know of another city, perhaps without the rich, entrepreneurial technology and science economy it lusts for, that is not known as an easy place to set up shop? Hint.

When Brown University has an excellent computer science department full of undergrads who work on side projects like developing an online course catalog because the system Brown purchased sucks, it’s hard not to conclude that there are latent geek talents just waiting to be tapped. And it’s not just Brown that could be engaged. What about Swipely, the newest tech startup from entrepreneur Angus Davis, a Rhode Island native who is very active in local government and in developing Providence? An exciting, fledgling technology company with young, smart folks who do application development with huge swaths of data every day is an excellent source of the kind of talent Providence needs. Why hasn’t someone approached Swipely about a charity opportunity– take all non-essential staff off their current projects, give them a break from the deadlines and the typical day, and get them working furiously for a week on rolling out something awesome and useful for city residents? It would reinvigorate young coders and greatly benefit the city.

Heck, something as simple as getting in and teaching government employees how to spin up instances of Amazon EC24 so they can migrate non-secure information off slow, self-managed servers would be huge. Imagine this, plus rolling out an API for critical municipal data sets. There would be a rich environment for hobbyists, students, and professionals alike to spark some creative ways to understand and interact with Providence.

Another example, assuming it’s legal: how long do you think it would take a smart developer, dedicated solely to this one task, to roll out a system that cross-references Providence landowners with the DMV registration database so that I and other residents didn’t have to trek documentation out to City Hall to avoid a 50% tax hike?

Providence is not San Francisco, but there are many talented developers and data scientists who are passionate enough about the city to donate their time and expertise. Honestly, for many of these folks there are real personal benefits, even without appealing to some sentiment of civic duty. So let’s open up our data, our infrastructure, and our employees. Let’s encourage folks from outside of government to inject excitement and new skills into current government IT employees and analysts.

It’s time for Providence’s own “Summer of Code”.


  1. To be fair, when I accessed the site today the site was substantially more functional than it was in two previous visits several weeks ago ↩︎

  2. San Francisco, unsurprisingly, and Philadelphia ↩︎

  3. From The Atlantic Cities article linked above. ↩︎

  4. FISMA compliant ↩︎

February 14, 2012

Social promotion, in education circles, refers to the practice of allowing students to move on to the next grade level or course even though they are unable to demonstrate they have mastered the skills and knowledge they were expected to learn. Ending or reducing social promotion has been a major theme in the standards-based education reform of the last 10-15 years. Ending social promotion feels like a sound, obvious consequence of standards-based education. Each year (or course) comes with a set of standards that articulate what students must know and be able to do once it is complete. Since the standards of the following course will assume proficiency on previous standards, there is a fundamental common sense to prohibiting students from moving to the next level before they have conquered all prior levels.

In reality, this is a gross oversimplification bordering on reductio ad absurdum. Allow me to throw a few wrinkles into the clarion calls for social promotion’s demise. First, some standards do not come packaged with lofty presumptions of prior knowledge or skills. For example, a student could be quite successful in a high school chemistry or physics course without being successful in biology. In fact, many students take these three courses in a sequence that explicitly prevents taking advantage of the natural interrelatedness of these sciences1. For sure, there are some skills that serve as critical gateways to future standards and expectations, but a student may fail a course while still having all of the core scaffolding in place for the next level. Second, it is unclear that repeating a class (or an entire grade level) is an effective mechanism for reaching acceptable achievement. What proportion of the content to be repeated has a student already learned successfully, with no need for reinforcement? Are the strategies and pedagogy used to teach students new material the same as those used to re-teach it? I’m doubtful. Then there are behavioral and social concerns. What are the impacts on a student’s self-esteem? What are the impacts, particularly in elementary schools, of mixing students across even greater age ranges? If students must relearn content, reread the same books, and so on, what will happen to their engagement with the material? What is the impact of isolating students from their friends in a way that might feel like punishment?

There’s research on both sides of this issue– some that demonstrates that students who are held back do better academically2, and some that shows outcomes are no better or worse. The impact on the socio-emotional side also seems mixed, although there is more consensus around negative consequences of being held back. That being said, much of the research I have read on social promotion looks at all students being held back in various contexts rather than specifically examining long-term effects within a large-scale implementation that regularizes the process, which perhaps decreases the social stigma of being held back and increases the efficacy of teachers working with held-back students. I may simply be forgetting one, but I can’t recall a large-scale study that used regression discontinuity (or instrumental variables), like the previously linked Jay P. Greene Florida study, that took a robust look at socio-economic outcomes. Less rigorous methodologies may introduce substantial bias to results3.

That being said, I generally think that social promotion is not ideal. In a perfect world, principals and district leaders would have a more precise sense of how well teachers are able to differentiate instruction in their classrooms. This way, they could adequately determine when a student is so far behind expectations that it is unreasonable to expect teachers to deliver the necessary instruction in a mixed-ability classroom. When a student is that far behind at the end of the school year, targeted summer intervention would attempt to bring the student to within an acceptable range by the start of the next school year. If this fails, then, and only then, should a student be held back.

None of this is new. In fact, Wikipedia led me to an article about social promotion archived on the US Department of Education webpage from May 1999 that is strikingly similar to everything I wrote above4.

Emily Richardson has a great post in The Atlantic5 about an intriguing alternative. David Berliner, an Arizona State University professor of education, says,

“Everybody supports the idea that if a student isn’t reading well in third grade that it’s a signal that the child needs help. If you hold them back, you’re going to spend roughly another $10,000 per child for an extra year of schooling. If you spread out that $10,000 over the fourth and fifth grades for extra tutoring, in the long run you’re going to get a better outcome.”

There has recently been an increase in evidence about the efficacy of intensive, very small group (like two-on-one) tutoring at raising academic achievement, based on a “successful replication” of the MATCH Charter School’s tutoring program in the Houston Independent School District’s Apollo 20 program6. Intensive one-on-one support has several key advantages that address some of my “wrinkles” about social promotion above. The instruction is specifically catered to the exact standards a student is weak on (and possibly to standards that are more foundational to future learning). The strategies and techniques used to teach a student need not be the same as whole-classroom, first-time-exposure learning. The intervention strategy feels like a surplus-driven rather than a deficit-driven policy, i.e. students are receiving more because of their achievement rather than being “punished”. And that is not remotely an exhaustive list.

The problem is that education funding is not structured to spend the way Berliner is recommending. Revenues are often raised or doled out on a per-pupil basis, so holding a student back for an extra year will virtually automatically result in additional formula-driven dollars7. There is no way to flag a student as needing the “5th-year of high school” funding now, in the form of two years of intensive, one-on-one tutoring in elementary or middle school. I am not sure that’s a good thing, because I really do think that Berliner is on to something here. The cost-effectiveness of the total investment in any one student identified as at risk of falling seriously behind is likely to be far higher when a huge influx of resources is used entirely on individualized intervention rather than on an entire extra year of education spent repeating a specific grade.

The logistics of supporting this kind of intervention with anything but local or private revenue is causing my brain to do mental gymnastics, but the complexity might really be worth the benefits.


  1. An aside for another day. ↩︎

  2. ending social promotion based on standardized assessments offers pretty much a textbook example of regression discontinuity studies, which I think is kind of cool ↩︎

  3. Although woefully outdated and using a source from 1986, the research section of the Wikipedia page on grade retention outlines this in effective and simple language ↩︎

  4. I did find the article after I wrote this post ↩︎

  5. And on her blog, The Educated Reporter ↩︎

  6. Of course there is lot of past research to support this idea, but I do think that Fryer’s Apollo 20 evaluation has reinvigorated discussion around the effectiveness of this type of intervention in the past several months. ↩︎

  7. In theory, at least. Of course many states are reducing their state aid formula in funky ways and there are loopholes in most maintenance of effort laws that are now being rigorously used to allow for per pupil decreases in local revenues ↩︎

February 8, 2012

I read literally hundreds of posts from RSS feeds every day. I use Google Reader as an aggregator, Reeder to actually read through my feeds, and Pinboard for social bookmarking and posting1.

In order to capture just a small slice of the stories I really enjoyed, I’ve decided to start a new feature called “Worth It”. I hope this will be between three and five stories each week, with some quick commentary, that I think are worth anyone’s time to read. This week I’ve thrown together five stories somewhat haphazardly, but in the future I hope these posts will lean toward highlighting longer features or reports as opposed to blog- or typical article-length pieces.

Feel free to use the comment section to recommend some stories that were “Worth It” from the last week that I may have missed.

From school facing turnaround, a tale of academic perseverance

The first “Worth It” piece is a great Gotham Schools story about a student who nearly fell through the cracks in the New York City school system, doomed to a tough life by coincidence, mishap, and possibly negligence. Unlike most students faced with such abject systemic failure, Moustafa Elhanafi’s story has a happy ending. Although he found himself illiterate and with no prospects at 18, he is now set on a course to graduate with his high school diploma, ready for college, by the time he is 21. Elhanafi was born in the United States but lived in Egypt with his mother from age 2 until age 8. At 8 years old, he moved back to New York City and lived with his father in Queens. When he was 11, the NYC school system had so totally failed him that it misdiagnosed him with mental retardation. The article hints at several reasons this calamity of errors may have occurred. Elhanafi was an English language learner, which can challenge the typical screening methods that trained social workers, psychologists, and others have at their disposal. He is described as shy and, at times, withdrawn. It’s quite possible that Elhanafi suffers from one or more learning disabilities and/or other unique psychosocial abnormalities, but it is also abundantly clear that being quarantined in programs designed for students with severe and profound special needs was no help. I strongly recommend you read this story and find out more about just what it takes to educate a student like Elhanafi. Without giving too much away, I’ll just say that this is an uplifting story that shows how much compassion and dedication– from teachers, parents, and students– can accomplish.

Why Pay for Intro Textbooks?

Rice University is admirably seeking to tackle the textbook publishing industry the right way2. Producing free, open-source, traditional, peer-reviewed textbooks for the post-secondary market is well worth the investment. The internet may have democratized content creation, search may have increased the relevance of the sea of materials, and social media may have helped to curate quality out of the still-massive relevant web. None of these is a substitute for true expertise subjected to a robust revision and editorial process on the road to expert peer approval. Wikipedia is one of the few corners of the web to get quality right, but the mental model users bring to an encyclopedia is perusal; there is no way to clearly stake out a path through Wikipedia to thoroughly learn a set body of knowledge. Textbooks offer an organizational framework that brings clarity, context, and connectivity to the information.

Charter Advocates Claim Rules in Works Would Affect Pensions

This is a real wonky one. Essentially, the IRS is grappling with how we define a government employee. The way some proposed rules are now written, it’s possible that charter school teachers will not be considered “government employees”. As a result, their inclusion in government pension systems could jeopardize several special protections because the systems would no longer be considered public. Virtually all states allow charter school teachers to participate in state plans, and a few, including Rhode Island, require that all charter school teachers take part in the state-operated teacher pension system. There are several excellent reasons for this policy, even if it costs charter schools more money than they might like. First, pension benefits are a major form of compensation for teachers, and because they accrue with experience they can serve to immobilize the teacher labor force. In fact, most states centrally operate their pension systems specifically to allow teachers to move across schools and districts without sacrificing their pensions. This is desirable if we want more efficient labor market sorting, since optimal sorting requires minimal (and preferably negligible) transaction costs. Charter schools want the option to draw from current public school teachers, and their ability to do so is greatly limited if benefits that have accrued over the course of a career are lost or severely diminished by transitioning into a charter school.

I am not at all sympathetic to the notion that charter schools are not public schools. Although we might debate the extent to which they are democratic3, they are clearly public entities. They supply a public good entirely through taxpayer dollars with almost all of the financial accountability requirements (and sometimes more) of traditional public schools. All federal public education laws and regulations apply to these schools, as do the majority of state laws and regulations (in most instances). Charter schools are public schools. But because charter school employees are technically directly accountable to a board that is typically not democratically elected, it is apparently debatable whether or not they are government employees. This seems odd to me. I work for the Rhode Island Department of Education and I am clearly considered a state employee. Yet my employer is a Commissioner of Education who is hired by the Board of Regents. The Board of Regents is an appointed body, not a democratically elected one. So while they may, in some ways, be directly accountable to elected officials, it’s a long way off to find direct democratic accountability for my position. In many ways, charter school employees have far fewer layers between them and the public, yet my status as a government employee would never come into question.


  1. I use IFTTT triggers based on Pinboard tags to post to Twitter, Tumblr, Facebook, etc ↩︎

  2. Apple’s iBooks Author is awful in comparison. Bringing what are essentially web page authoring tools to the masses and wrapping the materials in a proprietary shell is just awful on so many levels. This strategy just shifts the cost from individual books to the devices the books need to run on. These devices have a lifespan that’s shorter than a typical textbook and are very expensive. ↩︎

  3. using Sarah Goldrick-Rab’s recently offered definition, which described democratic to me as the extent to which stakeholders directly participate in institutional governance and decision-making ↩︎

January 30, 2012
111 Westminster lit at night by Flickr user kehuston

I was pretty disappointed, but not surprised, that Bank of America has chosen to leave 111 Westminster Street. The building is an iconic anchor to downtown Providence. Unfortunately, the space has not been properly refurbished to more modern standards. The entire building has a single, antiquated utilities system– heating, cooling, electricity, etc. are all set up for a single tenant. At 350,000 square feet, there is simply no one in Providence who needs an old, out-of-date workspace of that size. Renovations, at this point, are likely to be very expensive, although environmental advocates and businessmen alike should unite around refurbishing over new construction.

Downtown Providence1 has no shortage of vacant space. Saving 111 Westminster is going to take creative thinking and substantial investment. Fortunately, there is no better time, since the Arcade is undergoing some exciting changes.2

Because of the near perfect alignment of 111 Westminster and the Arcade, I’d love to see some true, deep collaboration that rethinks the area on Westminster Street between Exchange and Dorrance Streets3. Activating this space at the street level would be a huge boon to the Arcade, 111 Westminster, and the success of Downtown as a whole. I’ve got a handful of ideas, but I lack the skills to produce the beautiful renderings that would make my visions tangible.

So I’m going to just have to hope that Greater City: Providence recognizes this as the opportunity for a new Reboot, my personal favorite work on the site. In their words,

REBOOT is an occasional series of posts on Greater City: Providence where we identify areas of the city that display poor urbanism and propose ways to improve them. Our interventions may be simple and quite easily realized, or they may at times be grand and possibly take years or decades to complete. Either way, we hope they generate interest and discussion.

Past Reboots have featured the Providence Train Station and Olneyville Square, among others. Let’s Reboot 111 Westminster and the Arcade, and the whole area east of Dorrance4.

The original title of this post was unintentionally too similar to GCPVD’s post on this issue. I recognized this oversight independently and have edited the title.


  1. I am desperately trying to drop “Downcity” in favor of “Downtown” after training myself to say Downcity. ↩︎

  2. I know I’ve been cranky about the Arcade plans, but I think that’s more about my general mood than the merits of the redevelopment. It could work, and if it does, it will be very exciting for Downtown. ↩︎

  3. I’d love to see traffic closed along both Weybosset and Westminster from Memorial Boulevard down to Dorrance. ↩︎

  4. And 110 Westminster, while we’re at it, the massive condo tower-turned parking lot ↩︎

January 24, 2012

Ted Nesi has done a pretty solid job tracing the history of some awful decisions made by union-dominated boards that resulted in a significant number of retirees in the early ’90s receiving 5% or 6% annually compounded increases to their retirement income. These are often called COLAs, or cost-of-living adjustments.

Today, I am inspired by Nesi’s post on the rapid decline in the health of the Providence municipal pension fund since the 6% “COLA” was introduced in 1989. You see, something has really been bugging me about the conversation on municipal pensions in Rhode Island. A true COLA is key to ensuring that purchasing power is maintained throughout retirement. Essentially, quality of life and the ability to buy required goods should be consistent from the day you retire until the day you die. This is a goal that makes a lot of sense. But the cost of goods has not increased 5% or 6% year-over-year at any point in the past twenty years1.

So I chose a key moment in the history of Providence municipal pensions– a 1991 consent decree2 that then-Mayor Buddy Cianci signed, solidifying and legitimizing the extremely high “COLA” for workers. I wanted to know: what would a worker retiring in the following year (1992) be making today if they retired with a $25,000 annual pension and had a 6% “COLA”, a 5% “COLA”, or a COLA based on the Northeast CPI-U? Not wanting to make the key mistake of equating a CPI with a COLA, I increased the CPI-U for each year by 25%, figuring that this is a reasonable approximation of the marginal taxes that would be paid on additional income by these retirees.
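
To make that rule concrete, here is a minimal sketch of the adjustment in R, using just the first few years of Northeast CPI-U changes from the full script below for illustration:

# Each year, grow last year's pension by that year's CPI-U change plus a 25%
# cushion to approximate the marginal taxes paid on the additional income.
cpi <- c(0.0, 2.8, 2.4, 2.6)
pension <- numeric(length(cpi))
pension[1] <- 25000
for (t in 2:length(cpi)) {
  pension[t] <- pension[t - 1] * (1 + 1.25 * (cpi[t] / 100))
}
pension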

I suspected that 5% and 6% increases do not really amount to a cost-of-living adjustment, but rather a clear raise for retired workers. I have no problem maintaining parity or near-parity with retirement-level income, but there’s absolutely no reason someone who has retired should keep receiving raises. My support for a true COLA is so strong that I even made the adjustment for taxes on income!

What were the results?

Inflation1

A Providence employee who retired in 1992 with a $25,000 pension would be receiving $46,132 in 2011 if their retirement income had increased by inflation plus the marginal tax rate (assumed here to be 25%). But a Providence employee who retired with the same pension in 1992 under the actual conditions in Providence could expect $63,174 or $75,640 at the 5% and 6% rates, respectively. This is a MASSIVE difference, and it cannot constitute a “COLA”.
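
If you want to sanity-check those endpoints without running the full script below, the compounding works out as follows (19 annual increases between 1992 and 2011):

25000 * 1.05^19  # roughly 63,174
25000 * 1.06^19  # roughly 75,640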

So I move that we stop referring to these particular pensions as having a “COLA”, because what really happened was a fixed raise was created to last for the rest of retirees’ lives.

Some additional neat facts:

Over 20 years, an individual who has a 6% raise per year will have collected $228,672 more than someone who had a true COLA. An individual with a 5% raise per year will have collected $135,681.10 more over the same 20-year period.

And of course, here’s the code I used to produce the graph above in R3:

compound <- function(start,rate,timespan){
  x <- vector(mode = 'numeric', length = timespan)
  for(i in 1:timespan){
    if(i == 1){
      x[i] <- start
    }
    else{
          x[i] <- x[i-1]*(1+rate)
    }
  }
  return(x)
 }
    
inflate <- function(start, inflation){
  x <- vector(mode='numeric', length=dim(inflation)[1])
  for(i in 1:dim(inflation)[1]){
    if(i==1){
      x[i] <- start
    }
    else{
      x[i] <- x[i-1]*(1+(1.25*(inflation[i,2]/100)))
    }
  }
  return(x)
}

cpiu <- cbind(seq(from=1992,to=2011), c(0.0, 2.8, 2.4, 2.6, 2.8, 2.4, 1.4, 
                                        2.1, 3.4, 2.8, 2.1, 2.8, 3.5, 3.6,
                                        3.6, 2.6, 4.0, 0.0, 2.0, 3.0))

inflation <- data.frame(cbind(cpiu[,1], inflate(25000, cpiu), 
                              compound(25000, .05, 20), 
                              compound(25000, .06, 20)))

names(inflation) <- c('year', 'NECPI.U', 'FivePercent', 'SixPercent')
png(filename="inflation.png", height=640, width=800, bg="white")
par(mar=c(6, 5, 5, 3))
plot(inflation$NECPI.U, type='o', col=rgb(0,0.5,0), ylim=c(20000,80000), 
     axes=FALSE, ann=FALSE, lwd=1.5)
axis(1, at=1:20, lab=inflation$year)
axis(2, las=1, at=seq(from=20000, to=80000, by=10000))
lines(inflation$FivePercent, type="o", pch=22, lty=2, col=rgb(0,0,0.5), 
      lwd=1.5)
lines(inflation$SixPercent, type="o", pch=23, lty=2, col='red', lwd=1.5)
title(main="COLA or Raise?\n CPI-U v. Pension COLAs in Providence", col.main="black")
title(xlab="Year")
title(ylab="Annual Pension in Dollars\n")
legend(1, 80000, c('CPI-U NE + 25%', 'Five Percent', 'Six Percent'), col=c('green', 'blue', 'red'), pch=21:23, lty=1:3)
text(1, 25000, 25000, pos=3, col='black')
text(20, max(inflation$SixPercent), round(max(inflation$SixPercent), 0), pos=3, col='red')
text(20, max(inflation$FivePercent), round(max(inflation$FivePercent), 0), pos=3, col=rgb(0,0,0.5))
text(20, max(inflation$NECPI.U), round(max(inflation$NECPI.U), 0), pos=3, col=rgb(0,0.5,0))
dev.off()

This post reflects my personal views and opinions. I am a member of Local 2012 of the RIAFT and was a supporter of the statewide pension reform in the Fall of 2011. I am also a resident of Providence.


  1. Consumer Price Index Northeast from the Bureau of Labor Statistics ↩︎

  2. See the first link in this post ↩︎

  3. Sorry this code is not well-commented, but I believe it’s fairly straight forward ↩︎

January 22, 2012

For the past few months I’ve been learning how to use R. This morning, I decided to try out two firsts– importing a table of data read off the web and overlaying location data onto a map.

With a little bit of Google skill and just enough R know-how, I was able to produce this image:

Homesales

There were a few things that were kind of tricky for me. First, for some time I couldn’t get latitude and longitude components for the addresses. I figured there was something wrong with the way I was using the *apply class of functions in R. apply() (and the related functions lapply, sapply, etc.) is really handy, if a bit tricky, for beginning R users. These functions permit quickly “applying” a function across multiple elements. Traditionally this is done with a loop, but the apply() functions “vectorize” this process (R folks always talk about making your code more vectorized, which has something to do with the structure of objects in R and is beyond my computer science skills– essentially, vectorized code runs much faster and more efficiently than loops because of some underlying feature of the language). After playing around with apply, lapply, and sapply, I decided to move back to my “old” way of thinking and just write a loop:

latlongroll <- function(address){
  lat <- vector(mode = "numeric", length = length(address))
  lng <- vector(mode = "numeric", length = length(address))
  for(i in 1:length(address)){
    latlong <- gGeoCode(address[i])
    lat[i]<-latlong[1]
    lng[i]<-latlong[2]
  }
  return(cbind(lat,lng))
} 
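
As an aside, here is a tiny, self-contained example (nothing to do with the geocoding itself) of the loop-versus-sapply distinction I was fumbling with above:

# the explicit loop...
squares <- vector(mode = "numeric", length = 5)
for (i in 1:5) {
  squares[i] <- i^2
}
# ...and the sapply equivalent, which builds the same vector in a single call
squares2 <- sapply(1:5, function(i) i^2)
identical(squares, squares2)  # TRUE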

The loop still didn’t work– I kept getting a strange out-of-bounds error. So I decided to go down the rabbit hole of regular expressions to see if I could clean up my addresses any further (I couldn’t). Now seemed as good a time as any to figure out how to print to the console while a loop is running, to keep track of progress and see exactly where my function was stopping. This turned out to be a bit tricky because I didn’t know you had to include flush.console() in order to get the prints to work. When I figured this out, I found that my loop was getting caught on the 7th element, a perfectly well-formed address. When I ran gGeoCode() on that address alone, it worked fine. So I thought, maybe Google is bouncing me out because I’m hitting it too fast? And bingo, the final (working) version:

latlongroll <- function(address){
 lat <- vector(mode = "numeric", length = length(address))
 lng <- vector(mode = "numeric", length = length(address))
 for(i in 1:length(address)){
  print(i)
  flush.console()
  latlong <- gGeoCode(address[i])
  lat[i]<-latlong[1]
  lng[i]<-latlong[2]
  Sys.sleep(0.5)
 }
 return(cbind(lat,lng))
}

Other than that, the whole process was pretty straightforward. I have to thank Tony Breyal for posting the functions I used to get latitude and longitude on Stack Overflow. I also found the RgoogleMaps vignette to be very helpful, although I wish it had explained a bit better what was going on in qbbox().
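
For what it’s worth, my working understanding, based mostly on how it behaves in the script below, is that qbbox() simply computes a bounding box from the coordinate vectors, returning the latitude and longitude ranges that GetMap.bbox() expects (the coordinates here are rough Providence-area numbers I made up purely for illustration):

require('RgoogleMaps')
bb <- qbbox(lat = c(41.80, 41.86), lon = c(-71.45, -71.38))
str(bb)  # a list with $latR and $lonR, which get passed to GetMap.bbox()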

Finally, my full source for the above:

# Providence Real Estate Transactions over the last 40 days.
# Required Packages
require('XML')
require('RCurl')
require('RJSONIO')
require("RgoogleMaps")
# Functions
# Construct URL required to get the Lat and Long from Google Maps
construct.geocode.url <- function(address, return.call = "json", sensor = "false") {
 root <- "http://maps.google.com/maps/api/geocode/"
 u <- paste(root, return.call, "?address=", address, "&sensor=", sensor, sep = "")
 return(URLencode(u))
}
# Now that we have the proper Google Maps address, get the resulting latitude and longitude
gGeoCode <- function(address) {
 u <- construct.geocode.url(address)
 doc <- getURL(u)
 x <- fromJSON(doc,simplify = FALSE)
 lat <- x$results[[1]]$geometry$location$lat
 lng <- x$results[[1]]$geometry$location$lng
 return(c(lat, lng))
}
# Roll through addresses to create lat long
latlongroll <- function(address){
# Initializing the length of a vector dramatically speeds up the code. Far 
# better than reassigning and resizing each time in the loop.
 lat <- vector(mode = "numeric", length = length(address))
 lng <- vector(mode = "numeric", length = length(address))
 for(i in 1:length(address)){
# I kept the print in because this function takes a long time to run so I 
# like  to watch its progress.
 print(i)
 flush.console()
# To reduce the calls, I chose to store lat and long locally before 
# separating the two whereas initially I hit Google for each separately
 latlong <- gGeoCode(address[i])
 lat[i]<-latlong[1]
 lng[i]<-latlong[2]
# I'll have to experiment with the sleep time. I'm certain 0.5 seconds is 
# too long (and this is the bulk of the time spent on the whole code).
 Sys.sleep(0.5)
 }
 return(cbind(lat,lng))
}

# Open to the most recent real estate transactions for Providence on the 
# Projo
site <- 'http://www.providencejournal.com/homes/real-estate-transactions/assets/pages/real-estate-transactions-providence.htm'
# Read in the table with the header as variable names.
realestate.table<-readHTMLTable(site,header=T,which=1,stringsAsFactors=F)
# Remove the $ sign before the price
realestate.table$Price <- gsub("([$]{1})([0-9]+)", "\\2", 
                               realestate.table$Price)
# Cast price character as numeric
realestate.table$Price<-as.numeric(realestate.table$Price)
# Cast date string as date type (lowercase %y means 2-digit year, 
# uppercase is 4 digit)
realestate.table$Date <- as.Date(realestate.table$Date,format='%m/%d/%y')
# Dummy transactions or title changes have a price of $1, removing those 
# from data set
providence <- subset(realestate.table,Price>1)
# Removing properties that do not have an address that start with a street 
# number
providence <- subset(providence, grepl("^[0-9]+", providence$Address))
# Add lat and lng coordinates to each address
providence<-cbind(providence, latlongroll(providence[,3]))
# Calculate boundary lat and long for map
bb <- qbbox(providence$lat, providence$lng)
# Gets a map from Google Maps
map <- GetMap.bbox(bb$lonR, bb$latR, zoom=12, maptype="mobile")
# plot the points
PlotOnStaticMap(map,lon=providence$lng,lat=providence$lat)
January 17, 2012

The complicated school day is essentially designed so that the minimum number of staff are away from kids at any given time. Some folks are trying to combat this with common planning time and other scheduling gymnastics. These attempts are up against a strong opposing priority– students must be with an adult essentially 100% of the time. Not only that, but the middle of a hectic day focused on teaching is not really conducive to reflection, strategizing, and deep planning on a tight schedule. The temptation to use this precious time to put out the day’s (or week’s) fires instead of doing the kind of collaboration and professional learning desired is just too strong.

And quite honestly, I think teachers should get space in the day to vent, grade papers, set up their classrooms, call parents, and do all the normal “maintenance” required to keep their classes running. The question then remains: how do we create this dynamic space for professional learning, coordination of services, collaboration on lesson delivery, creative thinking about school structures, etc.?

I really think that Travis on Stories from School has it right: use technology to make it easy for teachers, administrators, and other staff to communicate and coordinate. There is no shortage of snake oil peddled to “solve” education in America, and one of the most persistent memes is that a technological revolution will alter classrooms forever. Technology’s real promise is in schools, not classrooms1. Sharing assessment results and lesson plans, coordinating with interventions being offered to a student, talking to other teachers who have or have had the same student, and more can all be made much easier with technology. Rather than scrambling to find time to meet face-to-face, faculty members can put energy into building relationships around teaching and learning when they have the time. The opportunity to break away from the time constraints typically placed on synchronous conversation is huge. The opportunity for rich asynchronous sharing is virtually brand new.2 I do not want this to sound like pushing “social media” for teachers, mostly because the technological innovations involved are not a part of the current flavor of Web 2.0 networking. Even the old tools like email, instant messaging, and perhaps the oldest social tool of all, discussion boards, could be extremely helpful for folks.

None of this is new, and most of it has been said, yet it seems like too few schools have found a way to leverage existing, inexpensive technology to make this kind of communication an essential part of work culture. That is a massive opportunity that should not be lost while district officials are distracted by quick-and-dirty four-week online courses for credit recovery.


  1. There are several reasons I’m not optimistic about technology revolutionizing the classroom. My preferred explanation is that the technology of learning has not changed– by this I mean not the technology of human-made machines, but the technology of the human machine. Human learning is no different from what it was in the past, so our technology can only promise new delivery mechanisms for the same thousands-of-years-old approach to teaching and learning. Education can gain efficiencies in delivery that are not to be underestimated. But I think of a revolution as changing a process so dramatically that an observer would barely recognize the process a decade later. Computers and the internet have certainly done that in some fields, but I don’t see this happening in schools. ↩︎

  2. Ok, so I guess teachers could leave a flyer in mailboxes in the past, but come on ↩︎

January 12, 2012

Jessica Lahey wrote an interesting post over on Core Knowledge Blog that I decided to comment on.

After I read back my comment, I realized it would be worth copying over here as its own blog post.

The most interesting part of the Chetty, Friedman, and Rockoff study is precisely the most banal– teachers who improve their students’ learning, as measured by increased achievement on standardized tests, also improve other, more distant and relevant factors in children’s lives1.

This seems obvious to anyone who isn’t vehemently anti-testing. Among a large group of the anti-testing regime, there is considerable skepticism that the standardized testing instruments being used by states are valid instruments for the “real” purposes of education. In fact, the line of thinking in this post is a close relative of this critique. Essentially, what is mathematically reliable is not necessarily valid for drawing conclusions.

CFR, in a massive study, essentially: 1) added to a large research base suggesting that teachers can in fact have an impact on standardized test scores; 2) demonstrated that impacts on standardized test scores are associated with broader, more distant, and, arguably, more important education outcomes; and 3) showed that these impacts persist throughout the lifetimes of students.

While it does NOT make a great case for teacher dismissal based solely on VAM, as the authors have essentially claimed in popular coverage, it does continue to strengthen the case that standardized tests are relevant, reliable, and meaningful indicators of a successful education system. The impacts on social outcomes (teen pregnancy) and economic outcomes (later earnings) show that a broad range of important outcomes we expect from schools are strongly associated with VAMs.

A good measure does not have to perfectly describe the intricacies of reality; it just has to give a rough, reasonable, and valid facsimile. CFR is just part of a growing tradition showing there’s a good case for VAMs to be a part of that image.


  1. Actually, I think that Baker, Di Carlo, and Dorn are all probably right that the tests for bias in VAMs for teachers are the most interesting part, but that’s purely from a geeky researcher perspective. I doubt they’ll have as much impact as other portions of the paper, and probably rightfully so ↩︎

January 6, 2012

I read an interesting article this morning on Israeli schools. Facing extreme poverty among Arab-Israelis and the ultra-Orthodox, Israel struggles to maintain three separate school systems and make them succeed. It reminded me of some interesting centralized policy reforms in Israel that have led to great natural experiments. For example, the so-called Maimonides’ rule, which capped class sizes at 40, allowed for some really interesting regression-discontinuity studies on the impact of class size1.

What I found most interesting in this article, however, was the point made by Jon Medved:

While agreeing Israeli schools need to raise their standards, technology entrepreneur Jon Medved doesn’t think Israel’s test scores tell the whole story. He says informal education, through the military, youth movements, and extracurricular activities, builds skills. He also praised programs for gifted children.

Consequently, Medved says he isn’t worried that Israel’s tech-driven economy will slide because of deficiencies in the school system.

“While I think it’s important to sound alarm signals, I haven’t heard from tech companies … ’the employees we’re getting are not educated,’” Medved said.

Two thoughts came to mind. First, we certainly are hearing in the United States that employers are unhappy with the caliber of candidates for open positions. It’s unclear whether in the US this is a result of low aggregate skills or just a tremendous mismatch between the skills that are attained and the skills that are now valued in the marketplace. Second, Israel’s universal military service provides several additional years of intense training and skill attainment even before college. Considering the broad range of roles one can take on in the military (for example, social workers in Israel are basically military trained, given job experience, and then sent out without needing five years of schooling, although I believe many do receive at least a bachelor’s), it’s hard not to see Israel’s mandatory service as a massive vocational education program.

I don’t have much analysis here, but mandatory public/civil service is an interesting concept that this article will keep me thinking about over the weekend.


  1. www.economics.harvard.edu/faculty/s… Angrist has also done several other studies in Israel with Lavy, a nice little summary of which is available here www.nber.org/reporter/… ↩︎

December 18, 2011

If you’re just beginning to use R and want a quick and easy way to make some charts, graphs, etc., GrapheR is a great package for quickly producing high-quality plots through a self-explanatory GUI. There’s an article about it in The R Journal today.

My only complaint is that GrapheR does not appear to have a way to export the code that produced the graph, which would be a very helpful feature for a beginner who wants to learn the guts of producing publication quality charts in R.

To install and then run…

install.packages('GrapheR')
require(GrapheR)
run.GrapheR()
December 13, 2011

Here is a doozy.

What is the purpose of the GED? Is it a market signal, indicating your employability to hiring agents, or does it follow a human capital path, wherein earning a GED is actually a process that increases your skill set?

The House wants all unemployed workers without a high school diploma to earn their GED. Their explanation assumes that the GED works purely through human capital theory, i.e., currently unemployed workers who don’t have a high school diploma will gain skills that make them more employable by enrolling in a GED program and earning the credential. Is there any evidence for this?

If we look at what the research says1, we learn that the GED is only effective at boosting the earnings of lower-skilled recipients. Tyler suggests that this may indicate that the GED acts as a labor market signal, demonstrating to future employers that recipients have the work ethic and extra motivation that low-skilled high school dropouts are believed to lack, and are therefore worth hiring. This is supported not only by the fact that low-skilled GED earners are the only ones who see greater earnings and employability as a result of earning a GED, but also by the fact that there is a lag in the “GED impact”. One possible mechanism for this lag is that the GED signals unobserved characteristics suggesting an employee is ready to learn and earn the additional skills they do not currently possess; therefore, low-skilled workers who earn a GED are more likely to be placed in a position where they will have the opportunity to increase their human capital and future earnings. Since high-skilled employees don’t seem to gain any benefit from a GED, their existing skill set presumably already operates as a sufficient signal of employability, such that there is no need for an additional signal of readiness for hiring. Besides, if the GED were truly ruled by a human capital model, we would expect high-skilled GED recipients to have become increasingly skilled through the GED process and, therefore, to also benefit from increased earnings after receiving a GED.

So what does this all mean? Let’s remember that the unemployment benefits being offered only exist for 99 weeks now (and would be lowered to 59 weeks in the House proposal). This means that all the unemployed we’re worried about were in fact employed within the last year and seven weeks. Do we believe that those who were employed as recently as one year ago, already deep into the economic downturn, who do not have a high school diploma fall into the low or high skilled group? It seems obvious to me that if you were employable 59 weeks ago you would almost certainly be in the upper half (if not higher) of the skills distribution among those who don’t have high school diplomas. So the result of the GED policy is likely going to lead to no benefit for these workers and will probably even decrease their earnings since they will have some costs associated with earning the GED– be it the fee for a course, the fee to take the test, the opportunity costs associated with spending time studying and working toward a credential that has no benefit, etc.

Worse, by dramatically changing the contexts in which folks earn a GED, we’re likely to completely change what signal the GED sends future employers. In essence, the impact is unpredictable, but it’s hard to see a path whereby earning a GED increases in value as a result of policies that ensure GEDs will be earned more broadly, under duress, and not by voluntary action.

So forget about all the other stuff you read on the GED requirement. We don’t have to worry about fairness, discrimination, or just plain shitting on people when they’re in some of the most dire straits they’re likely to see by adding even more to their burdens. The GED-only policy on unemployment benefits is simply unlikely to do anything other than transfer resources from the unemployed to the American Council on Education and companies that have GED prep classes.


  1. John Tyler was a former professor of mine. If Brown had a PhD program in education, I would go back in a heartbeat to work under him. ↩︎

December 11, 2011

So I wrote a harsh post after reading a harsh article by Kevin Carey in The New Republic about Diane Ravitch. I still stand by what I said. Namely, I’m very cautious about trusting Ravitch as a reliable narrator of history because I’m:

  • unfamiliar with good historiography/methodology so it’s hard for me to judge the quality of her work simply from the product itself
  • unaware of a rich discourse around education history in NYC and 20th century America in general that wrestles with, or even corroborates, Ravitch’s account
  • certain that Ravitch’s more recent writings often mischaracterize the power and meaning behind quantitative research and exhibit selection bias to fit a particular narrative
  • generally distrustful of public academics, particularly when their writing is mainly outside of their primary discipline.

That being said, there have been several well-written critiques of Carey’s piece, and I thought it only fair that I link to them to present a more complete picture of what many folks, some who agree and some who disagree with Ravitch’s current ideology, think of Ravitch’s work.

Mike Petrilli is certainly no fan of Ravitch’s rebirth as the anti-choice, anti-accountability voice du jour. But his piece in Flypaper in response to Carey is quite clear: the idea that Ravitch’s personal life  had an impact on her criticism of then NYC Schools Chancellor Joel Klein is unfair and wrong. As someone who worked directly with Ravitch and who had, independently, overseen the awarding of a grant to Mary Butz’s leadership program, Petrilli sees a different line of thought. Ravitch, in his view, simply correctly pointed to the flaw in Klein’s “clear the field” approach that tended to cut down successful or promising programs alongside the dead weight.

Dana Goldstein’s response suggests that Kevin Carey ignored the context in which Ravitch wrote. Goldstein suggests that Ravitch had to fight against a sexist academy, in a discipline that had increasingly taken on a polemicist’s tone, as a liberal who did not quite fit the mold of her times. These factors combined to generate the type of histories and writing that Ravitch would produce and are critical to understanding, without undermining, her work.

Finally, Diane Senechal writes in The New Republic today that Ravitch’s history is a far more balanced critique than Carey would have you believe, very well documented, and self-consistent. She does concede that Ravitch writes with the fiery, decidedly non-academic tone of a public intellectual. But here, Senechal views this as a strength, “arous[ing] general interest in matters that might otherwise seem out of reach or obscure.” Ultimately, Senechal’s main point is that Ravitch’s work is of very high quality and thorough, and that her tone should not overshadow the accomplishment of her scholarship.

December 10, 2011

I’m mostly writing this post because I had a fairly hard time finding a resolution to a really pesky error. For some reason, my iPhone 4S was recognized by Picasa but photo imports always failed. Whenever I tried, Picasa was clearly scanning through the files and then presented this error:

An error has occurred while attempting to import. Either the source is unavailable or the destination is full or read only (1).

The resolution was found on this page posted by Tradeinstyle. A slightly more thorough explanation of the solution is below.

If you are seeing this error, what appears to have happened is that several images are “corrupted” in some way on your iPhone. Unfortunately, fixing this requires opening up iPhoto. Once in iPhoto, you should be at the import screen and see all the pictures available on your phone. Several of these pictures will have a thumbnail consisting only of a dotted line forming a square– a blank thumbnail. You’ll want to import these photos and, after clicking import, be sure to select the option that removes them from your iPhone. Now move these imported photos from your newly created iPhoto library into your normal Pictures folder (or wherever you’re watching for pictures in Picasa). They’ll load just fine. Exit iPhoto, delete your iPhoto library (probably located in ~/Pictures/iPhoto\ Library) to avoid duplicates, and open Picasa. Because the “offending” pictures have now been removed, Picasa should be able to easily import your photos.

This problem does not appear to be specific to the iPhone 4s and is probably applicable to all iOS5 devices.

December 7, 2011

I am becoming increasingly frustrated by the failure of all the major players to get social right. I have a very simple dream for how the social web should work, and it’s baffling to me that many obvious use cases have not been addressed at all by Facebook, Twitter, or Google.

This is the first of two posts that will describe what I view as a viable framework for a social web experience. The whole goal of the social web, in my view, is to read, share, discover, and communicate about found content. This post will focus on finding and reading content. The second will focus on sharing and discussing that content.

Properly Handle Content Sources

One of the major shortcomings of Facebook, Twitter, and Google+ is how they handle content sources. The backbone of the ideal social experience is not simply sharing inane details of your personal life. It’s making the entire web a community activity. It’s about making communication on the internet as rich and natural an experience as possible. Branded pages and official accounts simply do not substitute for an excellent content platform. The origin of this problem is simple– the modern social network is built entirely upon connecting people, and content generators are just considered a hacked-up special class of people. Reading (and generating) content is the ground floor of the social experience.

What’s my evidence that this model is insufficient? The popularity of services like Google Reader, Flipboard, Feedly, etc. Need more? The three major social players are all introducing new ways to bring content sources into their services and keep my eyes within their systems. Facebook has its Social Reader, which is just creepy to me because I can’t share outside of Facebook and I don’t want everything my eyes pass over to be shared instantly with everyone. Twitter has its “Discover” tab, which goes well beyond trending and tries to create a pre-curated reader experience. Google has Currents, a Flipboard clone that’s based upon casual magazine-style reading, complete with a whole new set of subscriptions, a good mobile experience, and easy sharing into Google Plus.

But none of these solutions recognize that there are at least three major domains of accessing content.

Bookmarking

The first way people find content is from sources they want to read casually. These are the sites you check when you’re bored or when there’s a massive breaking story. This is your New York Times or CNN.com pushing out massive amounts of timely information that you just want to dip into from time to time. This is one form of content that folks are just starting to get right. Flipboard and Google Currents successfully provide a gorgeous platform for casually reading across many sources. There’s no need to keep track of every story or go through content methodically. This reading experience is quick and casual and all about stumbling across something.

Collecting

This is the bread and butter for RSS subscribers and one of the major areas that most social players are ignoring. Collecting means you want to read everything someone or some site writes. You want to make sure to come back and glance over things you don’t get a chance to see. Read/unread counts are a critical piece of making sure you read every piece of news that comes through. This is content you want to tag, save, easily search through, etc. Another way to think of collecting is the set of information you trust only yourself to sort through and curate. This isn’t your list of pretty recipes that come streaming in quickly and are throwaways. These are your trusted insider industry sources that get at the heart of your job or your most important hobbies.

Streaming

This is the traditional “news feed” of social reading. It’s how you see what all your “friends” are sharing, doing, and saying. This is about finding the trends, the conversations that are blowing up, the short funny statements, etc. You almost definitely don’t care if you miss something someone posts here, but you want to be able to see the cream that rises to the top. You want a way that conveniently allows you to enter someone else’s workspace and interact with the content they’ve shared with you. This is the other area where social has some models that work well, and it was the basis for all other activities on the social web. It’s also one of the social web’s major problems. The stream is massive and cannot be absorbed in whole. Most of the content is throwaway. But because almost all other social reading is based on the stream, all of our content, even what we collect and bookmark, becomes throwaway and short-lived, given the same priority as your long-lost aunt in an old age home playing Farmville.

For me, the social web has to start with providing me with a single space that aggregates what I want to read on the web. If content is not easy for me to get access to, I’m never going to even worry about being satisfied with my options for sharing. Social fails right now because these services haven’t gotten the reading paradigm right. None of them comes close to handling the three major domains– bookmarking, collecting, and streaming– effectively under a single attractive interface. I think Google is the closest. If you combine some of the features in Currents, their new, attractive mobile reader that works great for bookmarking, with some of the critical features in Google Reader, right now the best collector, then I’m very close to an ideal reading experience. My hope is that someone will find a way to combine all three and then provide the robust sharing features I’ll write about in my next post.

November 28, 2011

I am glad that Philissa Cramer is reporting on some of the deeper details of the Special Education Student Information System 1 implementation at the New York City Department of Education here. Many people don’t really understand the ins and outs of government contracting. Folks really think NASA designed and built the lunar module, for example, instead of realizing that it issued an RFP and contracted with Northrop Grumman to do that work. Similarly, in education, especially around complex technology projects, most districts and states purchase products or services through a bid process rather than develop solutions in house.

However, I am a bit disappointed in the angle that City Limits (Ms. Cramer’s source) took in its reporting. There are real problems with government contracting, but the story really mischaracterizes the situation around SESIS in an attempt to simplify the issue for casual readers. City Limits acts as though it is surprising, or even deplorable, that an RFP was awarded to a company with a largely existing product.

They point to the fact that Maximus, the vendor for the SESIS contract, was modifying an existing product to meet the requirements outlined in NYC DOE’s RFP as though this were clearly a bad thing. City Limits uses words like “revealed” and “simply” to describe what Maximus was offering. This ignores the reality of government contracting and shows disregard for risk mitigation. Almost all government agencies handsomely reward companies that can point to successes in developing and implementing solutions that meet many of the requirements outlined in the issued RFP. The government wants to hire people it believes can do the job and do it well, and often one of the best ways to make that determination is to see that someone has done it before. In almost all cases, this means selecting a vendor who has an existing product or process that meets many of the requirements in the RFP and will be expanded upon or modified. But government purchasers don’t just want experienced partners; they also want to leverage efficiencies by not paying for duplication. One of the major reasons for purchasing an existing product or contracting with a vendor is that school systems actually aren’t that different from one another, and the basic functionality and organizational structures required in an IT solution are shared across schools, districts, and states. Why pay substantially to build the same basic software infrastructure that already exists elsewhere? It’s a waste of money most of the time.

City Limits then goes on to criticize the massive cost increases that can occur due to change orders. This is a serious problem with government contracting, but they fail to really explore why. A change order occurs when the client wants new or additional functionality that was not included in the initial contract. The frequency and expense of change orders are not an example of why government agencies should not contract with outside vendors; rather, they demonstrate just how poor bureaucracies are at managing large-scale, complex projects. Change orders happen due to several failures, and almost all are the government agency’s fault. In no particular order, the government agency:

  • failed to do proper discovery before issuing the RFP and, therefore, missed major functional requirements that are not identified until more intensive discovery occurs during development or initial implementation;
  • agreed to a contract that was far too specific and did not allow for the reality that requirements do evolve over time (though often not in ways which substantially change the nature or quantity of work);
  • agreed to a contract that was far too vague such that the vendor can claim to have delivered a product or service when they did not meet already identified functional requirements for the system;
  • did not take into account the preparation and costs required to sustain the product beyond the life of the initial engagement with the vendor.

These are the main things that lead to change orders. If the government agency is doing a top-notch job, they can all be avoided, and the only occasions for a change order should be large external shocks that dramatically alter the functional requirements, intentional decisions to move away from the initial functional requirements after weighing the costs of altering the vendor contract, or a desire to extend and expand a relationship because the success of the initial implementation resulted in a substantially more advanced or mature product.

It is really hard to manage vendor contracts right. It requires actually knowing what you want to buy before an RFP is issued (or recognizing what is and is not known and correctly assessing how future decisions on those unknowns will affect the scope of the work). It requires a really good team of lawyers to counter outside forces that make their profits by carefully abdicating as much responsibility as possible at the contracting phase. It requires selecting good partners that are willing and prepared to evolve and work with the agency as its needs and knowledge grow and mature. It requires an honest assessment of the future resources that will be available to sustain large investments. And perhaps most difficult of all, it requires strong project management infrastructure throughout the entire agency to ensure alignment and consistency across multiple products produced internally and by multiple external partners.

The benefits of outsourcing products can be huge and are worth leveraging. Vendor contracts are difficult to manage, bureaucracies are not always well-suited to managing these projects, and virtually all government agencies struggle to get this right.

One last parting thought…

In my view, the most challenging aspect of getting vendor contracting right for government agencies is spending ample time upfront, even before issuing an RFP, articulating in detail the functional needs that a solution must meet. I feel that public sector employees are often so intimidated by the arduous process around issuing and awarding an RFP that they rush to get an RFP out there and worry about the details during contracting and product initiation. Resist this temptation at all costs. Whether a solution is custom designed internally or provided by an external vendor, satisfaction depends on the clarity of the desired outcomes. This is particularly true with technology projects. Vendors will always produce something, but whether the solution is any good is almost entirely up to good requirements gathering.


  1. For pretty good coverage, check out all of GothamSchools’ posts: http://gothamschools.org/tag/sesis/ ↩︎

November 27, 2011

UPDATE: You should also read this page on jasonpbecker for some very strong and interesting rebuttals to Carey’s article that I commented on below.

Do read this article on Diane Ravitch. I personally have two major criticisms of Ravitch, both of which Carey exposes eloquently.

First, I believe that she leverages her respect and expertise as an historian and professor to present herself as an expert in areas of academic research and policy where she has little expertise. This is very common among public intellectuals, and I think it’s deceptive and deplorable.

Second, I am unsure about whether she is a reliable narrator of history because my impression is that she’s the “best in the game” at least in part because so few are playing. I don’t personally have the skill to judge her histories, and given her blatant academic dishonesty in so many other areas where I have some ability to judge quality, I find it hard to view her as an honest operator.

What is somewhat new in Carey’s take on Ravitch, and what I think most here on Plus will find interesting, are two revelations. First, one I was already somewhat acquainted with: it seems possible that some of Ravitch’s shift to rhetorical vitriol against someone who seemed a natural ally (Joel Klein) may be partially attributable to a personal dispute involving Ravitch’s “partner” (this and other articles seem intentionally ambiguous about the nature of this relationship). Second, and most interesting to me, it appears that Ravitch doesn’t have the typical academic record of an acclaimed scholar in her field. In fact, it appears that Ravitch has produced almost exclusively popular history throughout her career. This detail in particular plays into my concerns about the reliability of her historical narratives.

On a side note, I think if I could be one person in education policy today it’d be Kevin Carey. He’s smart as hell and an excellent writer, even if I disagree with him on higher education issues.

November 5, 2011

I love that Providence is pursuing a streetcar. There are really just two things I don’t understand about the Core Connector’s proposal. I’m going to tackle one in this post.

Why is the entire streetcar route shared with general traffic, with no dedicated right-of-way? Truthfully, this isn’t a massive issue except in the core part of Downcity, where there is substantial traffic during rush hours along the streetcar route. But this makes the plan even more perplexing, because it’s precisely this portion of the route where an obvious solution for a dedicated light rail ROW exists: Westminster Street.

Quick and dirty Core Connector modification

The red route above represents the proposed streetcar line. The green line represents Westminster Street, a narrow, single-lane, one-way street that cuts through Downcity and in front of restaurants, boutique shopping, URI, etc. It brings the streetcar line slightly closer to Johnson and Wales and slightly farther from the Dunkin’ Donuts Center and the Rhode Island Convention Center. While there is real automobile traffic on Westminster, it exists almost entirely for two reasons. First, Westminster has substantial on-street metered parking. Second, Westminster is the east-to-west one-way street that counters Weybosset’s west-to-east traffic.

Of course, the two major Downcity planning projects underway are removing the pressures that lead to both of these uses. The Downtown Circulator project, which converts Empire and Weybosset to two-way streets, is nearly complete. Automobile traffic will almost certainly take the wider and faster Washington and Weybosset Streets, adjacent to Westminster, unless the goal is to find on-street parking. The second project is the Core Connector itself, which provides more options for getting into Downcity without a car, hopefully reducing the need for parking. There is also substantial underutilized parking capacity in the many surface lots and parking garages in the area.

As far as I can tell, there is really no need for Westminster to have street traffic. A dedicated ROW will increase the speed and predictability of the streetcar. Additional pedestrian space along Westminster could quickly be used by the cafes and restaurants and street vendors that already are in the area. The only reason I could come up with for not using Westminster as a dedicated right-of-way for the streetcar is the need for a turn in or around Kennedy Plaza. There are so many options for moving between Washington and Westminster that I just can’t buy this as an insurmountable challenge.

I was unable to make it to the three recent public meetings about the proposed route. If I were there, this would certainly be my first question.

October 15, 2011

There are lots of things that are misleading about this story published on GoLocalProv. It is utterly ridiculous to report numbers like the total tax dollars being collected by different communities for the sake of comparison. You cannot compare a total number like this, which is so dependent upon things like, I don’t know, the dramatic difference in the size of these communities?

In the 2010 Census, Providence had a population of 178,042. New Shoreham had a population of 1,051. Is anyone surprised that one of these communities is on the top of the list and the other on the bottom?

There’s more than size at play, but the least GLP could have done was to correct for population and present the per capita levy. All it takes is one quick Google search and we can get the 2010 Census numbers, which are probably pretty damn close to the current population, so we can get a decent, somewhat level playing field to compare cities and towns on. Here are the 2010 Census numbers from the RI Department of Labor and Training.
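
For the curious, here’s a minimal sketch of that correction in Python rather than Excel. It’s my own illustration, not GLP’s method; the three communities hard-coded below are just examples pulled from the full table that follows.

```python
# A minimal sketch of the per-capita correction: FY12 levy divided by 2010
# Census population, for a few illustrative communities from the table below.
levies = {
    "New Shoreham": (8_187_149, 1_051),
    "Providence": (324_460_407, 178_042),
    "Central Falls": (13_148_778, 19_376),
}

per_capita = {town: levy / pop for town, (levy, pop) in levies.items()}

# Sort from highest to lowest per-capita levy, as in the table below.
for town, value in sorted(per_capita.items(), key=lambda kv: -kv[1]):
    print(f"{town}: ${value:,.0f}")
```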

So with 5 minutes in Excel (thanks, GLP, for making your charts images instead of tables), here’s a much more interesting list:

| **Community** | **FY12 Levy** | **2010 Pop** | **Per Capita** |
| --- | --- | --- | --- |
| New Shoreham | \$8,187,149 | 1,051 | **\$7,790** |
| Jamestown | \$18,653,102 | 5,405 | **\$3,451** |
| Barrington | \$55,162,905 | 16,310 | **\$3,382** |
| East Greenwich | \$44,015,850 | 13,146 | **\$3,348** |
| West Greenwich | \$17,703,664 | 6,135 | **\$2,886** |
| Little Compton | \$10,004,530 | 3,492 | **\$2,865** |
| Narragansett | \$44,732,180 | 15,868 | **\$2,819** |
| Westerly | \$63,547,705 | 22,787 | **\$2,789** |
| Charlestown | \$21,611,447 | 7,827 | **\$2,761** |
| Portsmouth | \$45,807,376 | 17,389 | **\$2,634** |
| Warwick | \$216,867,072 | 82,672 | **\$2,623** |
| Middletown | \$41,588,607 | 16,150 | **\$2,575** |
| Newport | \$63,519,526 | 24,672 | **\$2,575** |
| North Kingstown | \$67,598,341 | 26,486 | **\$2,552** |
| Scituate | \$25,492,269 | 10,329 | **\$2,468** |
| Lincoln | \$51,960,896 | 21,105 | **\$2,462** |
| Foster | \$11,221,591 | 4,606 | **\$2,436** |
| Johnston | \$68,570,772 | 28,769 | **\$2,383** |
| North Smithfield | \$27,592,721 | 11,967 | **\$2,306** |
| Smithfield | \$49,357,148 | 21,430 | **\$2,303** |
| Tiverton | \$35,771,014 | 15,780 | **\$2,267** |
| Cranston | \$180,715,853 | 80,387 | **\$2,248** |
| South Kingstown | \$66,120,832 | 30,639 | **\$2,158** |
| Hopkinton | \$17,630,988 | 8,188 | **\$2,153** |
| Glocester | \$20,971,276 | 9,746 | **\$2,152** |
| North Providence | \$67,218,014 | 32,078 | **\$2,095** |
| Warren | \$21,971,276 | 10,611 | **\$2,071** |
| Richmond | \$15,705,615 | 7,708 | **\$2,038** |
| Exeter | \$12,619,379 | 6,425 | **\$1,964** |
| Providence | \$324,460,407 | 178,042 | **\$1,822** |
| West Warwick | \$52,337,257 | 29,191 | **\$1,793** |
| Coventry | \$61,860,355 | 35,014 | **\$1,767** |
| Cumberland | \$57,890,766 | 33,506 | **\$1,728** |
| Burrillville | \$26,687,010 | 15,955 | **\$1,673** |
| Bristol | \$35,697,780 | 22,954 | **\$1,555** |
| Pawtucket | \$96,340,757 | 71,148 | **\$1,354** |
| Woonsocket | \$53,984,558 | 41,186 | **\$1,311** |
| Central Falls | \$13,148,778 | 19,376 | **\$679** |

October 14, 2011

I wondered to myself if I could explain these two movements in a few sentences. Is this fair?

The Taxed Enough Already (TEA) Party movement is a response to two large government spending packages, the “bailout” and the “stimulus” package. These people felt that it was inappropriate for the government to spend taxpayer money (and foreign debt) in an attempt to prevent deeper economic damage from the collapse of the real estate bubble.

The Occupy Wall Street movement is a response to the same two large government spending packages as well as the subsequently ineffectual American government during the first term of the Obama Presidency. These people are skeptical that the “bailout” and “stimulus” package addressed the challenges that the vast majority of Americans face every day in favor of addressing the needs of an elite economic and political class.


October 5, 2011

I wanted to write a lot more about this, but I just don’t have the time.

This story is about rezoning schools in downtown Manhattan, which is struggling to meet the demands of emerging residential neighborhoods. Reading this story (and struggle) just brought up something I’ve thought about for some time now.

The cost of school buildings is ridiculous. Schools are generally built for one purpose. They are generally built to last a very long time. They are generally built to a quality standard that suggests it will perennially be far too expensive to knock down and start over even if renovations are obscenely expensive and inadequate. In most areas (dense urban cities are probably the exception), we build schools on large plots of land with field/park space attached. This land is technically for public use, but in the name of safety for children, land uses are far more restrictive than most public parks.

It all just seems like an absurd setup that wastes countless public dollars. Why wouldn’t we want to have smaller schools in mixed-use spaces that represent far less capital investment and introduce substantial budget flexibility as enrollment patterns change? Why would we want to build separate libraries from existing public resources? Why would we want separate fields rather than bringing students to truly public spaces during the day?

The school house as a public space that’s isolated and locked away from the community that builds it, the school house that’s on a 100-year bond designed in such a way that any conversion to other uses is very unlikely… isn’t that school house a bit anachronistic?

September 25, 2011

As a resident of Downcity, I have been closely following the development of Providence’s Core Connector Study. The official route and payment options have now been proposed, as reported in the Projo.

Route

I’m pleased to see that frequent service through the main ridership areas (College Hill to the Jewelry District and the hospitals) was prioritized over service to the train station. Jef Nickerson says it best over at GCPVD– the train station is out of the way and would dramatically increase ride times while having unclear implications for ridership, and the station is already very well served (and could easily be better served) by existing bus routes. The streetcar is really about moving people within Providence and providing permanence to the connection between the current (Brown and the hospitals) and hopefully future (Jewelry District) economic engines of the city.

The proposed route uses Washington and Empire Streets– both are wise choices. Washington Street adds the Biltmore, Lupo’s, URI, and AS220 directly to the route while keeping the Convention Center and the Dunk nearby. Washington also has the advantage of a direct route over 95 for a possible southwestward expansion in the future, with limited cost and few slowdowns thanks to a lack of turns. Empire Street is also a great choice. The road is about to undergo construction to be converted to two-way traffic (one of the last legs of the Downtown Circulator project). The corner of Washington and Empire anchors the streetcar at the Providence Central Library, Trinity Rep, AS220, 38 Studios, and Hasbro’s new Downcity location. Regency Plaza adds more residential ridership, and the massive parking lot across from the Hilton suddenly looks more attractive for infill development. It doesn’t hurt that I live at Westminster and Empire and would be excellently served by this location. Overall, I believe this path through Downcity is the easiest to manage while anchoring the streetcar near major hubs of activity. I only wish they could have found a way to bring the streetcar down Westminster and close the street to personal vehicle traffic, but that was never a likely option.

The Jewelry District part of the route never seemed as controversial to me because the handful of workable options was fairly obvious. The decision to use Chestnut makes sense and solidifies the Westminster-Chestnut Street path as a strong north-south connector through the area. Not going all the way to Prairie Avenue is likely to anger some Upper South Providence residents, but I’m not convinced this is a bad thing. At the Knowledge District Development Framework meeting it was clear that increasing connectivity between the two sides of Prairie Avenue was a major goal of development in the hospital area. Forcing people to walk a bit on Dudley Street may generate the kind of foot traffic needed to make infill in that area with first-floor retail a lot more attractive.

The Tax

I’m in favor of it. This one is a no-brainer in my mind. Property values will certainly increase in the areas served by the streetcar, and therefore current owners stand to see significant gains in equity if this project moves forward. The attractiveness of living where I am increases tremendously for Brown faculty and staff, medical students, and some of the entrepreneurs and their employees, if they ever materialize in the Jewelry District. That I should have to contribute some of this gained equity back to get the project built makes sense. The question is, will the requested tax be too high to be worth it?

So let’s do the math. The proposal will hit me with $0.95 on every $1,000 of property value. Let’s assume that the homestead exemption will not be applied to this tax. Let’s also assume that I’m looking at a 15-year stay in Downcity. This is reasonable because most of the properties around here are one- or two-bedroom condos that are not attractive to folks with families– we’re filled with young folks and empty-nesters who aren’t likely to be looking at this like a 30-year investment. Let’s also assume that the economy will continue to stagnate over this period, so we only see an inflation rate of, say, 2%. It’s likely that this is an overly pessimistic estimate, which only increases the calculated cost. What I’m interested in is the present discounted value (PDV), or the cost to me of this tax if I were to incur it all up front. The theory goes that money today is worth more than money tomorrow, because money depreciates in value due to inflation and because money today can be invested and will grow over time. We calculate PDV much like you would calculate compound interest. The final piece of data needed to calculate the PDV is my home value. Let’s assume it hasn’t moved at all since I purchased about a year ago, which would peg my condo at $168,000. Now I want to know whether the PDV of the tax will exceed what I believe is a reasonable estimate of the increase in equity I will realize because of the project. And the PDV is…

$2,050.74
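
For anyone who wants to check the math, here’s a minimal sketch of that calculation. It’s my own reconstruction, assuming the tax is paid once a year at year’s end and discounted at the 2% rate above:

```python
# A reconstruction of the PDV arithmetic above, not an official calculator.
# Assumptions: $0.95 per $1,000 of value, a $168,000 condo, 15 annual
# payments made at the end of each year, discounted at 2%.

RATE_PER_1000 = 0.95
HOME_VALUE = 168_000
YEARS = 15
DISCOUNT = 0.02

annual_tax = RATE_PER_1000 / 1000 * HOME_VALUE  # $159.60 per year

# Discount each year's payment back to today and sum them.
pdv = sum(annual_tax / (1 + DISCOUNT) ** t for t in range(1, YEARS + 1))

print(f"Annual tax: ${annual_tax:,.2f}")   # $159.60
print(f"15-year PDV: ${pdv:,.2f}")         # ~ $2,050.74
```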

I think it would be hard to argue that my property value won’t increase at least 1.2% because of this project. Adding the line, “Steps to the Providence Streetcar that will take you to Brown’s Medical School, through the Knowledge District, and to the Hospitals or through Downcity to RISD and Brown,” is going to be worth more than that, period. I can’t imagine the calculus is dramatically different for other Downcity property owners which means for us, this makes “cents”.

September 15, 2011

If you’re interested in education, I highly recommend Justin Baeder’s1 “On Performance” blog hosted over on Education Week.

Today, he ended his post with a question, “I would be very interested to learn of any other sector that has achieved substantial performance gains by reforming its evaluation processes. We’re putting a lot of eggs in the ‘improve teacher evaluation to improve student learning’ basket, but no one even seems to be asking whether this strategy has any merit.”

I think this is the right idea but the wrong question. What we should wonder is whether any other sector has achieved substantial performance gains by reforming its entire process for hiring, retaining, supporting, and terminating its employees when that sector started with an extremely rigid, non-differentiated structure. Teacher evaluation is about providing better professional growth opportunities targeted to an individual’s needs. It’s about rewarding folks who are doing a stellar job and making sure that you can reward mission-critical people who might otherwise leave for other opportunities. And, much to many union members’ chagrin, it’s also about providing a substantial and trusted evidence base that principals can turn to when justifying termination decisions.

Ask your favorite policy professional or administrator why they are pushing for centralized, mandatory, and prescriptive forms of teacher evaluation. I can guarantee they’ll include the current lack of serious evaluation in schools. I would bet that most folks also are pushing for these policies as a proactive step to make sure they can win union-based challenges against performance-based terminations and reassignment. Because the teacher unions are so strong and are largely steadfast in their need to treat all teachers equally2, policymakers feel like they have to wrap evaluations in as much novel social science and standardization as possible so that they have even the tiniest chance in hell of holding up in court. To what extent can the lack of robust evaluation be connected to school leaders’ lack of self-efficacy for action on this information?

Teachers fear that a world without these protections would produce unfair evaluations and termination procedures that are subjective. Secretly, I bet that most policymakers would be totally comfortable not pushing hard for value-added models and overly specific observation rubrics. So long as they felt confident they could take action in response to the evaluations, the current evaluation hawks would instead be willing to leave much more to individual professional judgment3. If the primary relationship in a school building was professional, and not a unionized labor-management split, a lot of the current evaluation policy might not be necessary. At the very least, the policies could be less centralized. But ultimately, professionals are held accountable for the quality of their work by bosses whom they respect as professionals.

I’ll end with one final thought: I wonder what the teacher evaluation narrative would be like in an alternative history where there was no split between the teacher unions and professional organizations of education administrators and professors.

Important note: while I do work at a state department of education, I am not directly involved in, nor am I intimately familiar with, our teacher evaluation model or policies. As an employee of the Rhode Island Department of Education, I am also a member of the American Federation of Teachers Local 2012 union. The thoughts I’ve expressed in this post are entirely my own and do not represent the positions of the AFT or RIDE.


  1. Per his EdWeek Bio, “Justin Baeder is a public school principal in Seattle and a doctoral student studying principal performance and productivity at the University of Washington. In this blog he aims to examine issues of performance, improvement, and the changing nature of the education profession.” ↩︎

  2. One day I’ll write about the irony of equality of treatment for education professionals. It’s strange that our thinking around funding has largely evolved from “equality” to “adequacy” but not our treatment of adults ↩︎

  3. Related important issue to solve– low principal quality undermines this possibility. One day I’ll write about my belief that the principal role is poorly designed and dooms most people to failure. Rethinking the building principal is a critical structural reform folks will be hearing more and more about ↩︎

September 14, 2011

As an undergraduate I largely avoided political science because I couldn’t imagine getting interested in reading The Republic, Leviathan, or Wealth of Nations. Political philosophy, and philosophy in general, just seemed like a horribly painful exercise, so I avoided it. Of course, now that I’m involved in public policy and not organic chemistry, it feels as though I’ve done a horrible disservice to myself by not going through and systematically exploring more fundamental questions about the role of the state, ethics and morality, justice, etc.

Part of my personal re-education in this area has been made much easier by access to a host of well-written blogs that host great conversations about these issues. These sources are smart, generally trustworthy, and generally collegial. By reading actual academics applying their knowledge to current events, I am able to access a much more sophisticated conversation than is available in most popular media.

One of these sources is Bleeding Heart Libertarians, which seeks to explain how libertarians can have robust participation in social justice. This is a particularly interesting topic since, as I understand it, one of the major critiques of libertarianism is that it does not address social justice in a comprehensive and sufficient way.

Today, commenting on Ron Paul’s response to Wolf Blitzer’s baiting on healthcare1, BHL contributor Professor Roderick Long brought up one of the libertarian arguments that most confounds me– charity and mutual aid. Long writes that a libertarian’s second response to an individual’s failure to secure basic services (specifically, Blitzer presents the case of a healthy young man who foregoes health insurance, but Professor Long’s suggested response is sufficiently general that I believe it is safe to say he would apply the same three stages to any situation where an individual’s circumstances or decisions have jeopardized their access to basic needs, including all social safety net programs) should be to “talk about how charity and mutual aid are more efficient than government welfare, and how we therefore need to shift the venue of assistance from the latter to the former.”

This argument has always felt extremely classist to me. Those who are most vulnerable have never been the folks with access to mutual aid or charity through local community organizations, family members, friends, and other contacts. General social capital aside, even people with strong community ties who are most likely to need a social safety net live in communities that overwhelmingly don’t have the collective resources to offer sufficient aid to promote the welfare of that community. The whole concept seems steeped in a highly culturally informed sense of reality that imagines a small-town church community rather than the reality of generational poverty.

It’s not that I’m unsympathetic to the argument that models of mutual aid and charity could ultimately provide superior resource allocation; it’s just that I don’t think aggregate efficiency is the goal here. I realize that this is a statement of prior moral conviction, but it seems to me that the ostensible purpose of safety nets is to provide the broadest possible coverage against a failure to meet the basic needs of all people. Under this view, efficiency is desirable but secondary, and I don’t see how mutual aid and charity can provide sufficient coverage to meet the needs of the most important beneficiaries of these policies.

September 13, 2011

For several years I considered switching to Apple. I’ve been a wannabe believer since OSX. I was using Linux as my everyday operating system. This was one part learning, one part frustration with Windows’ perpetual funky instabilities, and one part a growing appreciation for things like the command line interface and free and open source software. OSX offered many things I liked– a great CLI I was already getting intimately familiar with, rock-solid stability, beautiful graphic effects like those I enjoyed with Compiz, etc. More importantly, OSX could do all this while providing me with a decent experience on some of the software I’d love to dump but simply could not, like Microsoft Office. Additionally, I wouldn’t have all kinds of problems on the web surfing pages that were supposedly platform neutral and using browsers that were supposedly cross-platform (Flash on Linux was a joke as recently as two years ago, when I abandoned Linux as my everyday operating system). And perhaps first and foremost, I would never have to worry about whether an update would cause a conflict because I was forced to use a deprecated driver to get my hardware to work, or that some of my hardware would be limited because there wasn’t a fully functional driver available. But every time it came to a decision on purchasing a new machine, I could not bring myself to pay the “Apple tax”. I was a student, and while I’ve generally felt that the Macbook Pro/Powerbook line has had reasonable economics and a physical design well beyond its competitors, I just couldn’t justify the price-to-power ratio. So I continued to build my own desktops and work on an IBM Thinkpad I would buy after finding a good deal.

A year ago that changed when I purchased a 13" Macbook Pro. My laptop had totally crapped out on me and I needed a replacement fast. Combining the education discount (which I used a recently expired student ID for), a free printer and iPod, and tax-free weekend in Massachusetts meant I could buy a brand new MBP for around $900. Nothing really could compete with this– the price was right; the battery life, weight, size, and power were all right; and I couldn’t even reach parity with another PC. So I purchased my Macbook Pro.

I was very happy with that laptop. A great keyboard is a must, and the Macbook Pro’s was the best I had used other than my old Thinkpad T43’s. In addition to the keyboard, I also got a trackpad that was far superior to any I had used before, one that actually allowed me to ditch the mouse I normally carried with me everywhere I went. The battery life was a ridiculous 8 hours– my previous laptop got 2.5 hours at best, and I used to think that was an accomplishment. Power-wise I never ran into any hiccups. The stability was solid. OSX was easy to adopt as a full-time OS and I was on my merry way.

Except one, tiny, problem.

I hate using a laptop if I don’t have to. Whenever I was home, I plugged right into a much bigger screen, an external keyboard, and a mouse, and sat at a desk to use my computer. Call me old-fashioned, but I have never been as comfortable working on a laptop as I am using a separate keyboard, mouse, and monitor. And laptop speakers? Don’t even get me started.

Now, none of this was a problem as long as I used my laptop on the go. At the time I purchased my machine, I was just coming off of five years of school, during which I constantly worked in libraries, coffee shops, friends’ houses, etc. I had also worked as a consultant at several places over the past year, so bringing my workstation with me on the go was a necessity. One month after I got my laptop, I was working a normal desk job but was assigned a desktop from the stone age that was barely functional. I found myself doing a significant amount of my work on the personal laptop that I brought with me from home. But about six months ago, my job purchased me a new desktop that was blazing fast. I work with confidential data virtually all day long, which was a huge hassle when I used an old machine. I would often perform various data management activities with no more than one application open on my work computer, prepare the data in a non-confidential format, and ship it off to my laptop for more in-depth analysis. The workflow was atrocious. Having a functional desktop made it pointless to bring my laptop to work– most of what I do couldn’t be done on a personal machine anyway. So while my workflow became much more efficient, my laptop lost utility. More and more I found myself simply leaving my laptop plugged in at my desk at home, operating it like a desktop. Fast forward to today: I’ve fried my battery, which now holds only 3-4 hours of charge, and I haven’t used my laptop as a portable computer in months.

I decided I should sell my laptop and replace it with a Mac Mini, which brings me to the title of this post. Perhaps the most pleasant experience I’ve had on any computer since I first used a Gateway 2000 circa 1992 came from using Apple’s Migration Assistant. Upon turning on my Mac Mini for the first time, the setup wizard offered the opportunity to transfer files and settings from another computer. Now, this is a feature that browsers and other software have offered for years, and the experience has never been all that useful to me. But this time I decided to try it, and I plugged my laptop into my Mac Mini using an ethernet cable. Approximately two hours later my Mac Mini restarted, and the experience was breathtaking.

Everything, and I mean everything, transferred over to my new computer. All my applications were installed. All of my settings, including those made by software like Onyx and Geek Tool, transferred over. All my documents were where I left them. The experience was indistinguishable from logging onto my laptop.

This is an astonishingly great and useful feature. It seems so simple in theory, but execution can easily be botched. Apple hit a home run with Migration Assistant, at least as of the version that comes standard in Lion.

Ultimately, my experience with Migration Assistant, along with the great resale value on my Macbook Pro, has pretty much ensured that my next computer will be an Apple.

September 7, 2011

Tonight, I went to a public meeting run by Providence’s Department of Planning and Development. A few upfront observations:

  1. The folks who work for the Department of Planning and Development (DPD) were professional, kind, and capable, as were the consultants who worked with them. They maintained decorum and a genuine sense of openness even though there was a clear tension in the room, driven by some outspoken (and knowledgeable) community members and activists who had clearly participated in many past meetings.
  2. The ideas presented for the Knowledge District demonstrated thoughtful, albeit top-down, considerations for the space that showed a remarkable respect for the complexity, size, and importance of the project. I also felt that the DPD and their consultants got all the big ideas right and that they were quite familiar with the community they were re-imagining.
  3. Despite some great big ideas, there are really important details that are worth memorializing that the DPD is missing. In part, maybe this is because they’re simply not at that deep a stage. I’m hopeful that the purpose of the public meetings was not just to get feedback on the big concepts (which seem largely unassailable), but to make sure they get the details right from the people who live and interact with these neighborhoods daily.
  4. There are some clear areas of consensus among the engaged community, both positive and negative. At times, this consensus suggests that the state, city, and institutions (business, education, health, etc.) have very different goals than the community.

Here’s my summary of what the community had to say:

Folks want to get rid of surface parking lots and move toward more parking structures. It’s very clear that everyone feels these surface lots contribute to the desolate feel of the Jewelry District and hospital area, as well as acting as an extra barrier (beyond the highways) between the Knowledge District and the rest of the city. People want to see mixed use, with real activity on the street level and residences and offices on higher floors. Everyone wants better sight lines to draw people into the Jewelry District and wants to see green space embraced, not solely through a park on the water but intimately placed directly on the streets as trees and other landscaping. People cited Chestnut and Richmond Streets several times as strong streets that act as the main arteries connecting Downcity and the Jewelry District. There is broad agreement that a reason needs to exist to draw pedestrians into this neighborhood from Downcity, Fox Point, and Upper South Providence if we’re going to have a vital, 24/7 community. There was also pretty broad acceptance of taller buildings being constructed along I-95 and keeping large-footprint buildings out of the Jewelry District.

Most of the more negative tone of the evening came from two core issues– the need for more residential development, something that’s not seen as a high priority or even on the radar of most public officials, and funding. I’ll start with funding. There are serious concerns that even if the plans are perfect, a total lack of municipal, state, and federal funding for the foreseeable future places major risks on central aspects of the DPD’s plans. How can we fund a large, sweeping greenway and inviting, beautiful streets? How can we fund restoring vital roadways absorbed by bad planning in the past? How can we keep up the parks people crave or build vital family institutions like schools without any public funding? How will we repurpose landmark buildings like the Dynamo House, which has sat vacant even though it’s so full of potential? There is a serious sense that all the projects that serve as good models for the Knowledge District required considerable public infrastructure investment that’s just not available now. Sadly, to truly “fix” the Knowledge District will require not just one large project, but several major improvements. I saw little optimism for the proposed streetcar/light rail system that RIPTA is championing. Some of that was disbelief that it could ever be funded, and some of that was disbelief that the streetcar is a solution to a real problem. There’s also little optimism that the proposed pedestrian bridge to connect Fox Point to the Jewelry District is going to happen. Everyone lamented that DPD has little real authority to make anything happen. Zoning is an absolute disgrace in Providence, particularly in this area. But even substantially improving the zoning and regulations around development won’t actually ensure that individual projects are mindful of the plan’s wider goals. More to the point, it’s still unclear how Providence can simply use the name “Knowledge District” to bring in the kind of development needed. There is serious consternation that the entire “Knowledge District” concept is selling something that doesn’t exist and won’t have the infrastructure to attract folks. Without the promise of big public infrastructure improvements, developers are going to play the “wait and see” game.

Residential development is the other important aspect of what the community members craved. It was immediately clear that the desire of folks at this meeting was to have a 24/7 neighborhood with mixed uses, including several calls (from residents, no less) to include low-income and affordable housing. There is strong dislike of the name “Knowledge District”, especially if it supplants the Jewelry District (which is a subset of the area in question). No one feels that this name captures what they want to use the space for, and no one felt the name had any real meaning. Folks believed that “Knowledge District” was empty marketing that would have no real long-term staying power. But really, this goes beyond a bad name. The members of the community who met with DPD tonight clearly believe that residential development has taken a major backseat to institutional expansion and large business development. My interpretation was that the community felt that lawmakers were simply hoping that big groups like the hospitals, Brown University, Johnson and Wales, and some yet-to-be-named mid-sized businesses would snap up parcels and build massive buildings that would fill with jobs. First, they believed this vision was largely a fiction (again, see the hesitation folks had that businesses of any remarkable size would take on the development costs and move to Providence without the public infrastructure investments). Second, these kinds of buildings were not what the community envisioned, particularly for the Jewelry District area, which already has much smaller plot sizes and lower building heights. What I heard, rather clearly, was that the sea of parking lots around the hospital and the area along the raised highway at I-95 were fair game for the behemoth buildings most lawmakers are picturing for the entire project. But the interior of the Knowledge District has to be filled in with a place that people want to live, play, and work in (in that order).

I hope to write some more on my thoughts on developing this area in the future. I generally agree with the comments from the community above, but I have a bit more optimism and a bit more faith. Overall, I was really happy with how DPD is framing this project. They are very consciously thinking about distinct portions of the Knowledge District and respecting their differences while simultaneously ensuring cohesion and setting strong, wider goals.

I just hope we get a damn grocery store and dramatically cut down on surface lots. More on that later.