
Inciting Racial Hatred

Channel 4 & Again, Why Marketing Analysis Must Die!

So, Who Cares?

There is an assumption that bad methodology never hurt anyone. Unfortunately, this couldn't be further from the truth. The eugenics 'reasoning' of the Holocaust and the Third Reich was bullsh*t ideology turned into 'fact' through bullsh*t science and methodology, which was then used to confirm that abhorrent stance. Talk to enough 'experts' and you can prove anything you want to prove, though whether the integrity and competence of those experts remains intact and consistent is dubious at best.

In this case, Channel 4's foray into opening this sensationalist can of worms is horrific! I say this as someone who is generally supportive of Channel 4's position on alternative narratives and challenging broadcasting. Indeed, I happen to like it. I don't mind being challenged, but one thing you have to do is make the challenge credible. Otherwise it's like trying to have a conversation about our heliocentric solar system with someone who believes the universe revolves around them.

In this era of Islamophobia, one of the worst things you can do is put out misinformation. ICM, as the market research organisation commissioned by Channel 4, is in a position to influence viewers in a vulnerable climate of fear. On the whole, people are more receptive to information that confirms their irrational feelings than to information that is true. It's why 'gut feel' is so often cited as a good thing to have, despite the numbers routinely showing otherwise. Hence, adding to that fear by publishing misinformation plays into the hands of irrational fear-mongering, lends false credibility to bigotry and fascism, and is irresponsible at best.

Methodology Flaws

Some of the flaws have been covered by other commentators and have indeed appeared in the Huffington Post, so I won't re-present those arguments. However, I intend to highlight some of the methodological flaws in a bit more detail. I will try to keep it accessible and use more graphs than maths, as they're easier to digest and, being selfish, faster to write. Feel free to get in touch if you want me to show any of this by calculation (e.g. tests for independence, skew, etc.).

Non-representative Sampling

This has been touched on before. However, it's important to highlight it here, as there are some crucial assumptions that need answering. What is clear, though, is that this research fails to account for a statistical phenomenon known as Simpson's Paradox.

Let's illustrate Simpson's Paradox by way of example. It is widely accepted that most internet access happens through mobile devices (71% v 29%). This is fine as an average. If internet usage weren't correlated with age (i.e. if age weren't a factor), we'd see roughly the same value in every age group. However, we don't. Older people still predominantly use desktops to access the web.

This is an example of how the same question asked of two different groups yields two 'correct' but contradictory results. It can also happen if questions are phrased differently to two different groups, effectively contaminating the research by creating what is known as a covariate, which then needs to be accounted for. Covariates are real or potential alternative 'confounding' routes to the end result, other than through the hypothesis under test. To use a classic example, if you go outside and get wet, is 'it rained on you' the only way that could happen? No. You could fall in a puddle, have a puddle splashed on you, walk through a sprinkler system unawares, and so on. This shows up in correlations (or known causations, in a small enough set of finite possibilities), but even then you have to be careful.
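To make the mechanism concrete, here is a minimal sketch with entirely made-up numbers (nothing to do with ICM's dataset): one phrasing of a question does better than another within both age groups, yet looks worse once the groups are pooled, simply because each phrasing was put to a different mix of people.

```python
# Hypothetical counts (invented for illustration, not ICM's data) showing Simpson's
# Paradox: phrasing A gets a higher "agree" rate than phrasing B within *both* age
# groups, yet a lower rate once the groups are pooled, because A was mostly put to
# the group that agrees least. Aggregates can contradict every subgroup they contain.

responses = {
    # (phrasing, age_group): (agreed, asked)
    ("A", "younger"): (95, 100),
    ("A", "older"): (400, 1000),
    ("B", "younger"): (900, 1000),
    ("B", "older"): (30, 100),
}

for phrasing in ("A", "B"):
    for group in ("younger", "older"):
        agreed, asked = responses[(phrasing, group)]
        print(f"phrasing {phrasing}, {group}: {agreed / asked:.0%}")
    total_agreed = sum(responses[(phrasing, g)][0] for g in ("younger", "older"))
    total_asked = sum(responses[(phrasing, g)][1] for g in ("younger", "older"))
    print(f"phrasing {phrasing}, pooled: {total_agreed / total_asked:.0%}\n")

# A wins within each group (95% v 90%, 40% v 30%) but loses pooled (45% v 85%),
# purely because of *who* each phrasing was asked of -- the covariate described above.
```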

Figure: what tech would we miss if it wasn't there (younger v older)?

ICM’s research appears to introduce covariates into their process in a few different ways.

It allegedly did not ask the same questions of the control group. If someone isn't a Muslim, why ask them questions directly aimed at British Muslims? That alone discredits any results emanating from the research, because they are effectively two different surveys assessing two different things. The number of unaccounted-for covariates goes up massively, raising the required threshold of significance and therefore the number of surveys and samples you have to take before a result can be distinguished from chance.

So what did this look like? Do we have a representative sample? The answer is clearly 'No'. Indeed, the LSOAs selected didn't have a consistent distribution even among themselves.

The difference in shape of the following two graph lines indicates that some areas had significantly higher numbers of Muslims relative to the general population. The idea is that if the density of Muslims were the same throughout the Lower Super Output Areas, the graphs on the primary and secondary axes would follow the same 'trajectory', but they obviously do not.

That clustering needs consideration, even though the correlation is 95% (don't be fooled by that value; it is exactly what you would expect from sampling the same areas for both the general control and the experimental groups).

Figure: Muslim and general populations across the selected LSOAs, plotted on the primary and secondary axes.
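As an aside on that 95% figure, a high correlation between the two population lines does not mean the Muslim share is consistent from area to area. A minimal sketch, with invented area-level counts rather than ICM's tables, shows the share swinging from 2% to 40% while the correlation between the two series still lands around 0.9:

```python
# Invented area-level counts (not ICM's data): total residents and Muslim residents
# per area. The Muslim share varies wildly (2% to 40%), yet the two series are still
# highly correlated, because the big areas dominate both lines.
from math import sqrt

total  = [2_000, 5_000, 20_000, 80_000, 150_000, 300_000]
muslim = [  800, 1_250,    400, 24_000,   7_500,  66_000]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print([f"{m / t:.0%}" for m, t in zip(muslim, total)])  # ['40%', '25%', '2%', '30%', '5%', '22%']
print(round(pearson(total, muslim), 2))                  # 0.91 -- high, despite the uneven clustering
```

The point being that the quoted 95% says very little about whether the selected areas resemble the country at large.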

Lower Super Output Areas

Super Output Areas are statistical clusterings convenient for government research. The Lower Super Output Areas (roughly 1,500 residents and 650 households) form an atomic enough unit to make policy decisions at, across all further aggregated levels of government, from parish councils and local authorities through county councils and central government.

Selecting LSOAs which had a greater than 20% Muslim population automatically means that at least 300 of every 1,500 people in any one group would be Muslim.

Let's put that in context. Rossendale, containing all the old mill towns, has an estimated population of 65,700 to the nearest 100, of which 20.7%, or 13,600, are Muslim. The borough has a number of towns within it where such clustering happens. 2011 census data showed that Rawtenstall, for example, with a population of 22,000 of the 65,700, has a 0.2% openly Muslim population. That is 44 folk. OK, so they're not there!
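For anyone who wants to check that back-of-envelope arithmetic rather than take it on trust, here it is spelled out, using the figures exactly as quoted above:

```python
# The figures as quoted above, worked through.

lsoa_population = 1_500
print(0.20 * lsoa_population)                  # 300.0 -- Muslims per 1,500-person LSOA at the 20% threshold

rossendale_population = 65_700
print(round(0.207 * rossendale_population))    # 13600 -- 20.7% of Rossendale's estimated population

rawtenstall_population = 22_000
print(round(0.002 * rawtenstall_population))   # 44 -- the 0.2% openly Muslim population of Rawtenstall
```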

Indeed, this trend in the ICM data is disturbingly present across all aggregates. To quote Lancashire County Council:

For the Lancashire 14-authority area, there is a higher percentage of people from a white ethnic background (90.9%) than is the average for England (87.5%) and the great bulk of these are “White British” (Table 1). Aside from these, the most significant ethnic group is “Asian or Asian British” who comprise 6.5% of the resident population, a proportion marginally above the national average. All other broad ethnic groups have a lesser representation in Lancashire than nationally.

At district level, five of the Lancashire local authorities record small proportions of black or minority ethnic (BME) populations, representing less than 5% of their total resident population

So where are these 20.7% of Muslims? They do not show up at county level, and they do not show up at local authority level. Hence, as the name suggests, the LSOAs ICM picked were so small as to be insignificant on the UK or even the county stage. After all, 1,500 people can fit into a single council estate or even a single high-rise building. It seems to me that the interviewers targeted areas which they knew had large ethnic minority populations.

Furthermore, Manchester has a population of 523,727 (2015 figures). As can be seen in aggregate in the ICM data tables (not as transparent as they claim, since it's not the atomic dataset), Manchester contains 4 main LSOAs with a Muslim population greater than 20%. We don't know which they are or the population in each. There are 34,378 LSOAs in total, and out of that lot they picked 4 in Manchester. That's roughly 0.01% of the total. This becomes important in a bit. Bombshell coming up.

A Note on Culture

The UK jobless rate is currently 5.1%. That's about the same as the proportion of Muslims in UK society on average (4.8%). Look around your neighbourhood on your way to or from work. Would you say that 5 out of every 100 people in your area are unemployed? Some will say yes (and more) and some will say no.

Now, think about what your area would be like if 20 out of every 100 people were unemployed. Some might claim it would quickly turn into a waiting list for the Jeremy Kyle show, but concentrate on the important aspect here. The culture of the area you live in will change (using the dictionary definition: how things work, what people do with their time, the number of people walking around, the number of cars in driveways or on the street during the day, what people's outlooks are, the relationships they have with things and with others, and so on; all of which may be positive or negative but is crucially not about nationality or race).

If a community of any type has no need to leave its local area, why would it? Hence, if a particular area is in the main of one demographic, with no need to leave it, then its residents will obviously respond to the survey questions the way they did. What about those Muslims in London who integrate exceptionally well? They were excluded from the survey altogether, in turn inflating the negative figures.

Regardless of culture, the UK, like anywhere else in the world, has problems with social mobility. Picking towns and cities where 20% or more of the population is Muslim (just like 20% unemployed), when 4.8% of the general population is Muslim, means concentrated clusters exist that are outside the norm (i.e. communities exist). That almost always means such communities have shops, enterprises and community group activities that don't require anyone to leave the area, just like ex-pats in Spain or Dubai. Hence, the survey's findings about socialising outside work with those of a non-Muslim background are just the result of normal activity, since these residents have no need to socialise that way. Do you go around making friends with randoms just to satisfy a statistic? I think not.

What Does This Have in Common with Manchester's Tech Scene?

A few months ago, during StartUp EU Week, I wrote a disgruntled article after Nesta research embarrassingly missed out Manchester, the UK's second-biggest tech hub after London, while highlighting Oxford, which is small fry. That research suffered from the same problems this research does: they selected 35 cities, certainly including capitals and perhaps a few others, then borked their own research method. You can read what was wrong with it below, but the same sampling and experimental error is present here too.

1% v 4%

The alarming statistic for me is sympathy with suicide bombers. Not just because such sympathies exist, but crucially because Trevor Phillips dared to draw a parallel with statistically insignificant data.

In order to grasp the problem here, we have to go back to that figure of 0.01%. It is so small as to make errors in the method exceptionally damaging relative to the result, in turn necessitating a very high threshold (a very, very small p-value) for it to count as abnormally different. Now, here is where I state the obvious, but it is so obfuscated that it may not even be evident to those with a reasonable level of statistical literacy: a 1% rate in the general population and a 4% rate within a 20% sub-population are essentially the same value.

Let me make the numbers easier to digest for illustration. Take 100 people, 20 of whom form the sub-population, with a 4% sympathy rate among those 20. Multiply 20 by 0.04 (i.e. 4%) and you get 0.8 people, or 0.8% of the total population. Round that up and what do you have? 1%. Brilliant! With the sampling skewed the way it is, the number returned would have to be significantly higher among the 20 people, or significantly lower in the general population, for this result to be significant as a test against the general population. Indeed, if you look at the variation in the graph I showed earlier, it is so large that the 0.8% to 1.0% gap (i.e. the '1% v 4%' difference) cannot be explained by anything other than random fluctuation in the ethnicity rate from town to town, and as you can see from ICM's own data, those fluctuations are significantly bigger than that. Sometimes in the order of 100 times bigger.
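To put that same arithmetic in one place (a trivial sketch, using the 20% and 4% figures as reported):

```python
# A 4% rate inside a 20% slice of the population contributes 20% x 4% = 0.8% of the
# whole, which rounds to the 1% reported for the general population. Put on the same
# base, the headline '1% v 4%' gap all but vanishes.

sub_share = 0.20   # the >20% Muslim-population threshold the areas were selected on
sub_rate = 0.04    # the 4% sympathy figure reported within that sub-population

overall = sub_share * sub_rate
print(f"contribution to the whole population: {overall:.1%}")      # 0.8%
print(f"rounded to the nearest percent: {round(overall * 100)}%")  # 1%
```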

Plus, they didn't measure Islam anyway: they asked people with Asian-sounding names, which is in itself potentially racist, and from a method perspective carries a high probability of picking up someone who isn't a Muslim but has a Muslim-sounding name, or missing someone who is a Muslim but has an anglicised name. False positives and false negatives respectively. ICM are idiots!

Conclusion

Choosing areas of 20% or more when the average in the general populace is 4.8% is a massive difference in datasets. It is clear from that alone that the research is not representative of British Muslim thinking as a whole, as it didn't include any well-integrated Muslims. You cannot make that extrapolation, however right your methodology. All it proves is attitudes and thoughts within the tiny geographical areas sampled. That is the total extent of what credible research could claim, let alone incredible research like ICM's.

As for the claims about money: if doing it properly would not have been possible within the budget, then as a matter of integrity I would have turned the work down, or segmented along smaller, representative lines, rather than conduct research like this and put out an incredible result! On a topic like this, it's morally bankrupt! It's something I, and indeed any research organisation worth their salt, would never dream of doing. It's only marketing companies that take this sort of thing on. Do not create a horrific piece of research with the potential to be used for socially negative, politically [in]sensitive and, indeed, probably abusive justifications.

In case it isn't obvious, my view is now very definitely that the field of marketing analysis needs an overhaul. It's a mess! If you are a marketer, stay a marketer. Don't do public policy, and do understand the implications of what you're taking on.

To ICM: I know you want to raise your profile and drive traffic to your website and social media, and you'll have got it after yesterday. Just be clear: the one thing you do not understand is statistics.

Finally, as an atheist (yes, one with a Muslim-sounding name), I have nothing to gain from writing this. I don't like what I saw because it is a load of sh*t, not because I set out to find rubbish in ICM's work simply because they gave me something I didn't like. The justification that someone will always find fault if they don't like you isn't valid if the stuff you're putting out really is rubbish. The fact that such marketing doesn't get as much scrutiny as it should is a failure both of the organisations producing it and of ourselves, as a society, to review it adequately. ICM, you work in a field which produces garbage all the time. That's fine, as long as you stay there. Moving into fields where we need less garbage, not more, is not the right thing to do. So you should not be surprised when people who have the skills you do not take your 'research' apart and perhaps tear you a new one.

ICM, get the F**k off the Internet! Channel 4, you were sold a dummy!

Ethar distills and communicates information for companies and organisations, helping them become leaner and more agile. Axelisys specialise in providing innovative agile enterprise advice and systems to blue-chips and SMEs alike.

Image courtesy of http://kdhdigitalservices.com/
