
Jo Caird: Reading the Stars at the Fringe

At the end of July, just before setting off for Edinburgh, I did a little statistical analysis of the shows I reviewed at last year’s Fringe. In 2010, as this year, I was reviewing for another publication alongside Whatsonstage.com, and out of the 43 shows I reviewed for both publications over the course of the festival, I filed one five-star review, 16 four-star reviews, 11 three-star reviews, nine two-star reviews and two one-star reviews.

This year, out of the 21 shows I saw (please note that it’s not that I was twice as lazy, but that I was in Edinburgh for half as long), I gave two of them five stars, five of them four stars, 10 of them three stars, three of them two stars, and one of them one star.

In percentage terms (rounded to the nearest whole number*), this means that while last year five-star shows accounted for 2% of my total, this year they accounted for 10%; four-star shows accounted for 37% of the total in 2010, down to 24% this year; 26% of the shows I reviewed last year received three stars, compared to 48% this year; 21% of shows last year earned a two-star rating, down to 14% this year; and both this year and last, I gave one-star reviews to 5% of the shows I saw.
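For anyone inclined to check my maths, here’s a quick sketch in Python that reproduces the percentages above from the counts as stated (using the stated totals of 43 and 21 as the denominators):

```python
# Sketch: reproduce the star-rating percentages quoted above.
# Counts and totals are as stated in the post.
counts = {
    2010: {"5*": 1, "4*": 16, "3*": 11, "2*": 9, "1*": 2},
    2011: {"5*": 2, "4*": 5, "3*": 10, "2*": 3, "1*": 1},
}
totals = {2010: 43, 2011: 21}

for year, stars in counts.items():
    for rating, n in stars.items():
        pct = round(100 * n / totals[year])  # rounded to the nearest whole number
        print(f"{year} {rating}: {n}/{totals[year]} = {pct}%")
```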

It would appear from these figures that, in general, I saw higher-quality work this year than last, with significantly more shows receiving five- and three-star ratings and significantly fewer shows receiving two-star ratings this year. The trouble with drawing such neat conclusions, of course, is that star ratings only tell part of the story and are an inexact science, particularly in an intense critical atmosphere like the Fringe.

It’s fairly common at the Fringe to hear artists bemoaning a ‘three-star review that reads like a four’, or any of the other possible variations on that statement, disappointed that the star rating their work has been given and the criticism it has received are somehow mismatched. Perhaps unexpectedly, this is almost as frustrating from a critical point of view: star ratings may be beloved of PRs and marketeers seeking an eye-catching way to make their clients’ shows stand out from the crowd, but any reviewer who truly cares about theatre must balk at such a reductive system. There’s nothing like writing a considered analysis of a piece of work to a very tight word limit (yet another frustration of reviewing at the Fringe) only for it to be skim-read purely for its star rating.

The added difficulty is that although in an ideal world critics should be judging every show they see on its individual merits, in the harsh reality of the Fringe, where you’re seeing and reviewing multiple shows every day for days on end, this is easier said than done. It’s unlikely that a student production with a tiny budget and a cast of inexperienced young actors will compare favourably with a show by a well-reviewed, Arts Council-funded organisation playing the Fringe as the jumping-off point for a costly national tour – this much is obvious. Less obvious, though, is how to avoid letting your impression of that slick, professional show affect your take on the ramshackle but promising piece of work that you find yourself watching less than an hour later. It’s a constant battle, and one that I fear I don’t always win, despite my best efforts.

All that said, though, the question remains: did I see better work this year than in 2010? I’d have to say ‘yes’, but it doesn’t actually matter in the slightest: it’s not like we can extrapolate anything about the standard of work at the Fringe from such a tiny sample. The only conclusion I do feel it’s fair to draw from this data is that over time, I’ve become marginally more skilled at selecting shows to review. If I continue at this rate, by Fringe 2020, I’ll be giving almost everything I see five stars. Bring it on.

*Please excuse any errors in my figures. It’s been a long day.