Academic articles*: why are so many such shite?


*I'll start with a caveat: I only read academic articles in business-related fields, and so this refers only to these. I have no idea if academic articles in other subjects are shite or not - though my research prior to writing this suggests that other folk in other fields have a similar opinion to me.

And so ... I've never been a big fan of academic research in the field of digital [or whatever it may have been called in the past] marketing.

It probably comes from having worked outside academia, where I put models, concepts and theories into practice before I even knew those models, concepts and theories existed. And therein lies my bias. As far as marketing - and much of business, for that matter - is concerned, the practice came first ... then someone wrote it up as a model, concept or theory - and they [probably] sought kudos for coming up with it.

But hold on ... theories are just that, aren't they? Theory. As in: does not exist in reality. Which is fine if we are talking about science. But is marketing a science?

Within academia - practitioners don't really care that much - there is a constant debate over whether marketing is an art or a science. In recent years the science argument has gained strength on the back of digital applications - computers, programs and algorithms are all science, aren't they? My feet, however, are firmly planted in the art camp. Science can help marketers make decisions, but science cannot make those decisions.

Scientific study is frequently based on the assumption that there is a specific answer to every question - an approach generally known as positivism. This is fine if all the variables or units remain the same. For example, if you add a set amount of substance X to substance Y in a controlled environment at a set temperature there will be a fixed result. That result is the same now as when the experiment was first conducted - and will be in the future. And it will be the same result whether the experiment is conducted in Kathmandu or Cleethorpes.

In marketing, however, we can't even agree on what our variables and units are [add some advertising to some sales?], let alone find a controlled environment. The environment in which marketing is practised - and researched - is made up of human beings. Humans are pesky critters who have a tendency to be different. They have differing thoughts and opinions based on individual experiences. And they are different depending on whether they are from Kathmandu or Cleethorpes.

Marketing research - I believe - can at best be interpretivist, where any theory applies only in the time, place and environment in which the experiment takes place. Marketing cannot have any laws or bodies of theory that are, as in science, universal. Ergo, marketing is not a science. For a snapshot of how widely this issue is debated amongst marketers, take a look at Ritson versus Sharp: Who won the clash of the marketing titans?

So why this wander into the prickly terrain of science versus art?

Much of the following content is paraphrased or verbatim from the preface of the third edition of Digital Marketing - a Practical Approach [yes readers, if you don't reference it, you can be guilty of plagiarising your own work].

The practical nature of the content [of the book] means that there are also significant practical underpinnings - that is, there are references to the work of practitioners who have proved themselves at the coalface of digital marketing. Furthermore, data science has - in my opinion - negated the value of some academic research. For example, I read one article on online advertising that '... applied a vector autoregressive models analysis to investigate ... ' [confession: I had to look up what vector autoregressive means]. The findings of the research were pretty accurate. I - and others - knew they were accurate because Google's advertising analytics tell us the same as the findings - but in real-time data, not an academic paper.
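For readers who - like me - had to look it up: a vector autoregression [VAR] models two or more time series as predicting each other over time, which is how such a paper would link advertising to sales. Below is a minimal sketch in Python of the general technique; the statsmodels call is real, but the figures and variable names are invented for illustration - they are not the paper's data.

```python
# Toy VAR on made-up daily ad-spend and sales figures - illustrative only.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)
n = 200
ad_spend = 500 + rng.normal(0, 50, n)                # hypothetical daily spend
sales = 100 + 0.8 * ad_spend + rng.normal(0, 30, n)  # sales loosely track spend

df = pd.DataFrame({"ad_spend": ad_spend, "sales": sales})
results = VAR(df).fit(maxlags=7, ic="aic")   # choose lag length by AIC
print(results.summary())                     # does past spend predict sales?
```

The point stands either way: an ad platform's own dashboards surface the same spend-to-sales relationship in real time, without the modelling.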

Academic research in the subject area is outdated. The process of researching and publishing academic articles works against contemporary findings - an article published in 2017 may have no references that post-date 2015 [or earlier], as that is when the research was conducted - and whilst some findings pass the test of time, many conclusions do not. For example, any comments with regard to social media marketing made in 2015 are not necessarily true for Internet users now. Similarly, online advertising has changed so significantly in the last two years that any research into its effectiveness that pre-dates, well ... now, is useless for anything other than history.


Also ... some of the academic research in the subject area is of dubious quality. Continuing from the previous comment, later work often relies on the findings of earlier research without question, making subsequent conclusions potentially flawed. In particular, meta-analysis [of academic articles] is popular in this field - I have yet to read one that questions the research rather than accepting the findings as presented. Also, a surprising amount of the research is conducted only on university campuses, with respondents being either [a] academics, or [b] students. Similarly, many requests to complete questionnaires are posted online - usually on social media. Whilst this might be acceptable for some targeted research, when investigating Internet use these samples are not reasonable representations of the population [social media users tend to stick together in 'bubbles' - the sample is likely to be made up of a 'bubble' similar to that of the researcher who posts it].

However - and I am not sure whether this is a compliment or a criticism - it seems most academic articles on digital marketing include somewhere within them a phrase something like:
There is still a significant gap in our understanding/research of the subject area.

I also find that the results of a great deal of academic research actually tell us nothing new. Or rather, tell practitioners nothing they have not already discovered by trial and error [see example #1].

Also with regard to academic research, I find there is confusion in the crossover between computing, business and other subject areas - with examples of discipline experts making basic errors when they stray from their own field. This includes marketers making technical statements that are flawed, as well as IT writers who - without qualifications or experience in the subject - make erroneous comments about business applications or, of specific relevance to this book, marketing applications [see examples #1 and #2].

Another significant flaw within academic research in the field is that it relies on previous academic research to maintain its validity. For example, an article by Lowry et al [2014] addresses the question of how quickly visitors judge a website when they arrive on it. Lowry - naturally - relies on other academics to support his assertion, saying that:

'Research suggests that 80% of web surfers spend just a few seconds viewing a site before continuing to the next site [Peracchio & Luna, 2006]. Moreover, most web users are unlikely to look past the first few pages of a website (Thompson, 2004)'.
And yet the likes of Amazon knew these things in 1994 - that would be 8 or 10 years before academics proved it. I knew them from reading the log files of websites in 1996 - as did thousands of other website publishers. Furthermore, we knew them absolutely, with no margin of error and no research bias. So long as you know how to interpret them, computer-generated website analytics have no research bias. They tell you exactly how long visitors stay on your website and how deep into it they go [update: it is now the case that Google analytics are not as accurate as they were once believed to be]. But hey ... these facts are not in a peer-reviewed academic journal, so they are not trustworthy. Furthermore, to conduct their research, Lowry et al used mocked-up websites to test their hypotheses. In real-life research there are no mock sites - they are real. Lowry et al also concentrate on the impact of logos on the perceived credibility of the site - which is fine, but the logo is only part of the perception. In real life - using real-time multivariate testing - you can check all aspects of the page that might influence a visitor's perception of the site's 'credibility'.

A related scenario to 'it doesn't count as legitimate until it's in an academic journal' is this. I recently read a half-decent journal article called 'Challenges and solutions for marketing in a digital era' [Google it if you want to read it] - at least, it was half decent in 2013 when it was published. Now, as I write [in 2019], it is useful only as background reading. Anyhoo, in that article was the quote 'As a consequence, brand managers no longer control the messaging they use to create brand strategies', which was referenced to three articles, one published in 2007 and two in 2012. But here's the thing: I have a record of me stating at a [non-academic] conference in 2001 that ' ... one of the biggest impacts of the Internet on marketing is that the marketer no longer has control over their marketing message'. Now, I'm not saying that [a] I was the only one saying it, and [b] it's unlikely that I dreamed it up myself - but isn't it strange that in an academic article the notion doesn't count if it comes from practice? It's only 'academic' when an academic writes the same thing in a published article. I wonder where Deighton, Fader and Moe & Schweidel picked up the idea six and eleven years [respectively] later? I can't be bothered to seek out their articles to see if they attribute the notion to me - or to someone like me who is, perhaps, more deserving. Oh, I should add that my 'quote' is in the first book I published, in 2007, and in several after that. But hey, books - even academic text books - do not count as proper research in the same way as academic articles, do they?


A final flaw I encounter in academic research is partially related to reliance on previous academic research, but is one that I can only describe as ignorance of the real world. For example, I have read published work that:

  • Assessed the quality of a website by a series of evaluations, one of which was the site's search engine optimization. However, the author used a totally inappropriate search term for that assessment [Google informs ad buyers of the common search terms in all industries]. The result - in real terms - useless findings.
  • Analysed the value to the organization of paid search advertising versus offline advertising - but the researchers ignored [didn't know?] that the highest bid does not automatically give a search ad the highest listing, and so the findings were so flawed as to be useless.
  • Was titled 'Popularity of Brand Posts on Brand Fan Pages: An Investigation of the Effects of Social Media Marketing'. The paper was published in 2012. Facebook stopped 'brand fan pages' in 2010.


What I find most frustrating about issues such as these is that people who work in digital marketing know the things that some academics seem to be ignorant of. And that is one of - if not the - key reasons I will not reference such articles in any of my books. Those books' primary objective is to help students understand digital marketing to an extent where they can find employment in the field. To direct them to flawed research as part of that learning curve would not only prevent them meeting that objective, it would hinder their progress.

Academic research, we are told, is used to test practical concepts. In some disciplines - predominantly scientific ones - this is perfectly valid. But where human behaviour is concerned there will always be inaccuracies in the responses from participants [yes, I know this is built into research analysis], whereas with computer-generated website analytics the data is absolute. There is no need for academic research to validate it. For example, all other things being equal, if real-time multivariate testing of hundreds/thousands/millions of visitors to a web page shows that they stay longer on that page if it is predominantly blue rather than green - then blue works best. Asking people if they prefer blue or green in a controlled environment can never give the same degree of certainty. And yet we are meant to value that academic research more than data pulled from real-life events. I fundamentally disagree with this notion - and this is reflected in my books. For an insight into the kind of real user-based experiments conducted by the major websites, find yourself a comfortable chair, serve up a drink of your choice [tea/coffee/beer/gin] and pass some time reading Seven Rules of Thumb for Web Site Experimenters. Experience meant that when it came out in 2014 I was already aware of most of the tales included in it [I'd used some in my teaching and books - and still do], but if you are new to all things 'digital' - i.e. with less than 20 years' experience - this paper [it is not an academic paper] is essential reading.
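If you want a feel for how trivially that blue-versus-green comparison falls out of the data, here is a toy version: randomly split visitors see one variant each, and a simple significance test compares how long they stayed. The numbers are invented; real platforms run this continuously, at vastly greater scale.

```python
# Toy A/B comparison of dwell time on two page variants - invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
dwell_blue = rng.exponential(scale=95, size=5000)   # seconds on the blue page
dwell_green = rng.exponential(scale=90, size=5000)  # seconds on the green page

t, p = stats.ttest_ind(dwell_blue, dwell_green, equal_var=False)  # Welch's t-test
print(f"blue mean {dwell_blue.mean():.1f}s, green mean {dwell_green.mean():.1f}s")
print(f"t = {t:.2f}, p = {p:.4f}")   # a small p suggests the gap isn't chance
```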

It is also the case that I'm not a lone voice in questioning the validity of some academic papers. As he has an impressive list of publications, perhaps Dennis Tourish [a professor of leadership and organisation studies] would not totally agree with my title for this page, but his comments in Do business schools still have brand value? suggest he might at least see where my point of view comes from. He says:

On the few occasions that a delightful piece of writing catches the eye, it feels as rare as the sight of a pink unicorn unicycling across a campus quadrangle. The view seems to have taken hold that serious work must be painful to read, and almost impossible to understand. There seem to be five golden rules for academic writing in management studies these days.
First, do not write about genuinely important issues, since this might reveal that you really have nothing worthwhile to say.
Second, never use a short word where a long one will do; this prevents anyone understanding what you mean, further insuring you against criticism.
Third, never use one word when you can stretch to four; this wears your readers out, and bores them to boot.
Fourth, fresh metaphors, humour and irony wake people up, and are therefore your enemy. They should be shot on sight.
Fifth, bamboozle people with jargon, and plenty of well-known names. This further paralyses their critical senses: if Bourdieu or Heidegger said it, then it must be right. Right?
But you get bonus points if you can find a French philosopher that no one has ever heard of: the deader the better.
It often feels like this kind of work has been written by a computer rather than a person. Come to think of it, this might not be so far from the truth when you consider how much of quantitative papers consists of tables auto-generated by SPSS software, and how many "critical" papers seem to just cut and paste obligatory sets of references.
We need to call time on this kind of nonsense. Those who write like this have one primary goal: building their careers, via publishing papers.


I think that I break all of Professor Tourish's 'five golden rules' in my books - and even more so on this website - and so his comments support my assertion that I could never write an academic article. And I say that with a certain degree of pride. A footnote to Prof Tourish's piece is that I doff my cap to anyone who uses the word deader. Kudos, Prof Tourish.

A long, long way from my subject area, The Lancet has made one of the biggest retractions in modern history. How could this happen? is included here as it offers a critique of the system of academic publishing - including peer review, one of the issues that I see as being massively flawed.

My scepticism toward academic research is not, however, absolute. Of course there are papers out there which challenge conventional thinking and so inspire marketers to re-consider practices. One which springs to mind is A New Marketing Paradigm for Electronic Commerce by Donna Hoffman and Thomas Novak. Published in 1996 - and so written at least a year earlier - this paper predicted [almost] exactly the impact the Internet would have on digital marketing in the years since. It's available online - take a look and see what you think.

Furthermore, it is articles in academic journals that bother me most. There is some very good research by academics out there that is conducted as just that: research. The best of it challenges the findings of other research, or models/concepts that have simply become accepted without real question.

As a footnote, however, it has to be said that my scepticism towards research also extends to what I refer to as practitioner or commercial research [I use many references to such in my books]. Independent bodies such as Nielsen deliver impartial data and analysis - but others have an in-built bias. An organization that sells software for use in marketing on Facebook will always present research into users' activity on the platform with a positive slant, for example.

My final comment on academic research is that before it is published it goes through the process of peer review. That is, 'experts' in the subject area review the content for flaws. Hmmmmmm ... I'll let you judge for yourself how well this system works. And you don't really want to get me started on the game that is getting published in academic journals :-).

Examples of what is, in my opinion, shite research
#1 ... The Impact of Online User Reviews on Camera Sales.
#2 ... The effects of blogger recommendations on customers' online shopping intentions.
#3 ... An investigation of global versus local online branding.
#4 ... Right research ... wrong researcher.

Example 1

Zhang et al. (2013) The Impact of Online User Reviews on Camera Sales. European Journal of Marketing

The abstract of this paper includes the following:

  1. Practical implications. This research indicates that the retailers should provide channels for and encourage customer online reviews for search goods to improve sales. It is also beneficial for online retailers to provide detailed product attributes to help their customers make the purchase decision. Carefully designed and executed price promotions could also be effective ways to improve sales of searchable goods.
  2. Originality/value. This study is one of the first attempts to investigate the impact of online user reviews on sales of search goods.

Now, I do not doubt or question the integrity of this article's authors [or, indeed, that of any academic researcher], but - in my non-academic-research opinion - Amazon and a thousand other online retailers knew the first element of the practical implications back in the last century [I certainly did]. And, by definition, a search good is a product that is easily appraised before purchase and so is subject to price competition - so nothing new there either.

As for the originality/value: Amazon - and its contemporaries - will have been, and still are, running real-time research on the impact of online user reviews on sales of search goods, again since the last century. This might have been one of the first academic studies of its kind [I have often come across references to an article by Godes and Mayzlin, published in 2004, as 'the first researchers to investigate the impact of the online review'] - but it does not tell us practitioners anything we hadn't known for nigh on ten years.


-----------------------------------------


Example 2

Hsu et al. (2013) The effects of blogger recommendations on customers' online shopping intentions. Internet Research, Vol. 23 Issue: 1, pp.69-88

The stated purpose of the paper's research was:

'... to examine whether the blog reader's trusting belief in the blogger is significant in relation to the perceived usefulness of the blogger's recommendations; and how the blog reader's perceptions influence his/her attitude and purchasing behaviour online. The moderating effect of blogger's reputation on readers' purchasing intentions is also tested.'

In my opinion, that describes research of a psychological nature - though as I am a marketer I would say the subject is consumer behaviour. Full biographies of the three authors are not available with the paper, but their university departments are listed: Computer Science and Information Management. Whilst I do appreciate that there are academics who have dual specialisms, there is no indication that any of the authors have any qualifications or experience in marketing, let alone consumer psychology. So, before I had even read a sentence of the paper I had my doubts about its value, let alone its validity in the real world.

Furthermore, as I do when marking students' dissertations, I started with a quick look at the paper's reference list. Of around 80 references, fewer than a quarter were to marketing, psychology or even business-related journals, the majority being from computer science fields, including several related to the Technology Acceptance Model (TAM). My background of working with computer scientists in a digital environment means I am aware of this model. It is an IT concept that looks at how users accept technology and, in particular, considers the factors that influence their decisions about how and when they will use that technology. Call me naive if you wish - but in my opinion anyone who is using the Internet to read blogs that may influence their online purchase behaviour has already not only accepted the technology of the Internet, but is comfortable with it.

So why would research into consumer behaviour even mention a model designed to evaluate a technology? By this point I would normally have stopped reading the paper, as I felt it carried little or no validity for my practitioner's outlook on the subject of digital marketing. However, I write books on this subject - and this paper looked to be a contender for an example of my view of academic papers in my field of study. So I read on.

Sadly, I could gather no enthusiasm to continue after reading the hypotheses, which included:

  • H2a 'Trust will positively affect blog readers' perceived usefulness', and ...
  • H3 'Blog readers' attitudes toward shopping online will positively affect their intentions to shop online'.

My immediate thought was: do the answers to those questions really need researching?

Anyone who has ever worked in any kind of sales environment selling any product in any industry, market or environment will tell you that if someone trusts a person who is recommending a product then they are more likely to purchase that product. As for shopping online, isn't anyone who is psychologically in a position to trust an online blogger already making purchases online?

Bringing the subject more up to date, online retailers certainly knew the answers to these questions in around 1997. I certainly did. And I am not even going to mention the role bloggers played in the early Internet, except to say that they were - probably - the first Internet authors to be trusted by users.

Finally, I checked the sampling procedure for the paper's primary research, which included placing a banner on one of the authors' Facebook pages requesting that the page's visitors complete the questionnaire. I'll leave a question hanging: is that a good example of a valid sample?

Example 3

Murphy, J. and Scharl, A. (2007) An investigation of global versus local online branding. International Marketing Review, Vol. 24 Issue: 3, pp.297-312

There are folks who refer to me as being an expert in digital marketing. I disagree. However, I do know a bit about the subject of this academic article ... domain names. I also know quite a bit about search engine optimization.

This article carries academic merit. In terms of academic rigour, it is sound. What I question is the value of such papers in the real world - for example:

The research supposes that the organization gave the choice of the original domain name much thought. I was there. They didn't.

In common with most [all?] academic research, there is a lapse in time between research and publication. In this case the research is stated as taking place in 2003 - the publication being in 2007. Four years is a long time in all things digital.

With very few exceptions, the dot com suffix is accepted as the one to use if your organization trades globally. That was the case in 1996 and it is still the case now.

Hypothesis #1 says that having a dot com gives the site a higher PageRank. It might have done ... but only as one of many variables that are not factored into this research. It also assumes that having a high PageRank score is a significant benefit in SEO.
It isn't.

Also on the subject of PageRank, the article states that having no PageRank means near invisibility on Google.
No it doesn't.

Dot coms were the first domain names to be commonly available, so they have the longest 'history'. This is important because one of the 200 or so variables used in the Google algorithm is the length of registration of the host domain name. So, all other things being equal, a website hosted on a domain name registered in 1994 will have a higher Google ranking than one on any domain registered after that.

There is some confusion between PageRank and rankings in Google's search returns. The two are not the same.

There is an inference that the dot com is a better suffix to have for search engine ranking.
This is not the case.

No other variables used in the Google algorithm are considered in this research - therefore any conclusions related to Google listings are deeply flawed.

Hypothesis #2 is little more than coincidence, based on other points raised here and factors external to the Internet. For example, the article states that: 'MNCs listing a dot com domain name have a higher Fortune ranking than MNCs with a local domain.' This is simply a statement of fact, and has nothing to do with the domain name.

Hypothesis #3 takes no account of the potential technical reasons for using a dot com or ccTLD - such as hosting in local countries, or using directories on a single domain to give greater control and security. These are often the deciding criteria.

No consideration is given to the availability of ccTLDs in all the countries in which the organization trades. If you cannot register your domain name with the suffix of every country, then you may as well stick everything on the dot com.

Hypothesis #3 uses Hofstede's dimensions as a criterion. For all of the above reasons, I would discount it as any guide to why a domain name was registered. Even outside my opinion on the subject of domain names, Hofstede's original 'dimensions' are widely questioned when applied in a contemporary environment.

And finally - and this really is a personal viewpoint - there is the use of logistic regression testing in Hypothesis #3. The sight of any kind of mathematical formula sends shivers up my spine [I think it was an Act of God when I got a maths 'O' level] - but to use it to investigate why any given company uses a dot com domain name is, well ... shite.

So there you have it. An academic paper that has merit in an academic environment ... but as a document that might help an organization choose which domain name or names it should use in its global marketing strategy, it is not only worthless, it is shite.


--------------------------------------


Example 4

Right research, wrong researcher

Not quite an example of shite research - but you will see where I'm coming from.

I came across this advert for someone to undertake a PhD in the 'Effectiveness of advertising within the new media context'.


Let's start by ignoring the reference to, ahem, new media. I think they mean the Internet. I suppose nigh on 30 years is new when compared with newspapers ... or Sanskrit.

I'm more interested in the project description. Consider these sentences:

'Advertising, through maths and targeted media, plays a major role in the marketing of today's brands.'

'The considerable role of advertising within marketing has led to many academics and practitioners alike to study the effects of advertising on consumer attitudes and behaviour.'

Now, call me a bit naive, but from these I think the project is about marketing. For me, it's the repeated references to advertising and consumers that do it.

Yes, there is the reference to 'maths and targeted media' - but advertising has always had an element of 'maths and targeted media'.

Furthermore, the description ends with ...

' ... to what extent the short and long-term effects of advertising depend on the creative quality of an advertisement and in effect metric used.'

Again, 'creative quality of an advertisement' has more than a whiff of marketing to it. Okay, I get that - despite its shoddy description - the project is about programmatic advertising. Now, that does include a bit of maths. There might even be some machine learning and artificial intelligence. But the successful candidate won't have to do much maths and/or AI - they will use tools that use them, but they will not have to develop any programs themselves.

And so to the 'job specification'.

'We are looking for applicants with a Master of Science degree. Students with quantitative-oriented background, e.g. with an MSc in statistics, econometrics, data science, or marketing intelligence are invited to apply especially.'

Wellllll ... research into programmatic advertising will - probably - require quantitative analysis. But then a lot of research is quantitative. However, research into 'creative quality' and consumer behaviour normally includes - or is solely - qualitative research and analysis.

If we ignore the 'marketing intelligence' tagged on at the end, the person who completes this PhD into a marketing subject will be a statistician, an econometrician or a data scientist.

And in four years' time they will be a Doctor and get a job teaching digital marketing at some university or other - which introduces another grievance of mine ... non-marketers in digital marketing.

And the research? It will be published in around five years ... when programmatic advertising will still be exactly the same as it is now, and so the findings will still be perfectly valid. Yeah, right.

How to cite this article:
Charlesworth, A. (2018). Academic articles: why are so many such shite? Retrieved [insert date] from AlanCharlesworth.com: https://www.alancharlesworth.com/academic-articles-why-are-so-many-such-shite



This page was first published in February 2018 ... but it has been updated and amended since then.
Some of the content is drawn from my books, other pages on my websites or my public rants on the subject.

