*I'll start with a caveat: I only read academic articles in business-related fields, and so this refers only to these. I have no idea if academic articles in other subjects are shyte or not - though my research prior to writing this suggests that folk in other fields have a similar opinion to me.*
And so ... I've never been a big fan of academic research in the field of digital marketing [or whatever it may have been called in the past].
It probably comes from having worked outside academia, where I put models, concepts and theories into practice before I even knew those models, concepts and
theories existed. And therein lies my bias. As far as marketing - and much of business, for that matter - is concerned, the practice came first ... then someone wrote it up as a model, concept or theory.
But hold on ... theories are just that, aren't they? Theory. As in: does not exist in reality. Which is fine if we are talking about science. But is marketing a science?
Within academia - practitioners don't really care that much - there is a constant debate over whether marketing is an art or a science. In recent years the science argument has gained strength on the back of digital applications - computers, programs and algorithms are all science, aren't they? My feet, however, are firmly planted in the art camp. Science can help marketers make decisions, but science cannot make those decisions.
Scientific study is frequently based on the assumption that there is a specific answer to every question, generally known as positivism. This is fine if all the variables or units remain the same. For example, if you add the same amount of substance X to substance Y in a controlled environment at a set temperature there will be a fixed result. That result is the same now as when the experiment was first conducted - and will be in the future. And it will be the same result whether the experiment is conducted in Kathmandu or Cleethorpes.
In marketing, however, we can't even agree on what our variables and units are (add some advertising to some sales?), let alone find a controlled environment. The environment in which marketing is practised - and researched - is made up of human beings. Humans are pesky critters who have a tendency to be different. They have differing thoughts and opinions based on individual experiences. And they are different if they are from Kathmandu or Cleethorbes.
Marketing research - I believe - can at best be interpretivist, where any theory applies only in the time, place and environment in which the experiment takes place. Marketing cannot have any laws or bodies of theory that are, as in science, universal. Ergo, marketing is not a science.
For a snapshot into how widely this issue is debated amongst marketers, take a look at
Ritson versus Sharp: Who won the clash of the marketing titans?
So why this wander into the prickly terrain of science versus art?
Much of the following content is paraphrased or verbatim from the preface of the third edition of Digital Marketing - a Practical Approach [yes readers, if you don't reference it, you can be guilty of plagiarising your own work].
The practical nature of the content [of the book] means that there are also significant practical underpinnings - that is, there are also references to the work of practitioners who have proved themselves at the coalface of digital marketing. Furthermore, data science has - in my opinion - negated the value of some academic research. For example: I read one article on online advertising that '... applied a vector autoregressive models analysis to investigate ...' [confession: I had to look up what vector autoregressive means]. The findings of the research were pretty accurate. I - and others - knew they were accurate because Google's AdWords/AdSense analytics tell us the same as the findings - but in real-time data, not an academic paper.
Academic research in the subject area is outdated. The process of researching and publishing academic articles works against contemporary findings - an article published in 2017 may have no references that post-date 2015 [or earlier], as that is when the research was conducted. And whilst some findings pass the test of time, many conclusions do not. For example, any comments with regard to social media marketing made in 2015 are not necessarily true for Internet users now. Similarly, online advertising has changed so significantly in the last two years that any research into its effectiveness that pre-dates, well ... now, is useless for anything other than history.
Also ... some of the academic research in the subject area is of dubious quality. A continuation from the previous comment is that later work often relies on the findings of earlier research without question, so making subsequent conclusions potentially flawed. In particular, meta-analysis [on academic articles] is popular in this field - I have yet to read one that questions the research rather than accepting the findings as presented. Also, a surprising amount of the research is conducted only on university campuses, with respondents being either [a] academics, or [b] students. Similarly, many requests to complete questionnaires are posted online - usually on social media.
Whilst this might be acceptable for some targeted research, when investigating Internet use these samples are not reasonable representations of the population [social media users tend to stick together in 'bubbles' - the sample is likely to be made up of a similar 'bubble' to the researcher who posts it].
However - and I am not sure whether this is a compliment or a criticism - it seems most academic articles on digital marketing include, somewhere, a phrase something like: There is still a significant gap in our understanding/research of the subject area.
I also find that the results of a great deal of academic research actually tell us nothing new. Or rather, tell practitioners nothing they have not already discovered by trial and error [see example #1].
Also with regard to academic research, I find there is confusion in the crossover between computing, business and other subject areas - with examples of discipline experts making basic errors when they stray from their own field. This includes marketers making technical statements that are flawed, as well as IT writers who - without qualification or experience in the subject - make erroneous comments about business applications or, of specific relevance to this book, marketing applications [see examples #1 and #2].
Another significant flaw within academic research in the field is that it relies on previous academic research to maintain its validity. For example, an article by Lowry et al (2014) addresses the question of how quickly visitors judge a website when they arrive on it. Lowry - naturally - relies on other academics to support his assertion, saying that:
'Research suggests that 80% of web surfers spend just a few seconds viewing a site before continuing to the next site (Peracchio & Luna, 2006). Moreover, most web users are unlikely to look past the first few pages of a website (Thompson, 2004)'.
And yet the likes of Amazon knew these things in 1994. That would be 8 or 10 years before academics proved it. I knew them from reading the log files of
websites in 1996 - as did thousands of other website publishers. Furthermore, we knew them absolutely, with no margin of error and no research bias.
So long as you know how to interpret them, computer-generated website analytics have no research bias. They tell you exactly how long visitors stay on
your website and how deep into it they go. But hey ... these facts are not in a peer-reviewed academic journal, so they are not trustworthy.
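For readers who have never looked behind the analytics dashboard, here is a minimal sketch of how dwell time and page depth can be derived from log-style records. The record format and data below are invented for illustration only; real log files [e.g. Apache's combined format] need proper parsing, and sessionising by visitor identifier alone is crude.

```python
from datetime import datetime

# Simplified access-log records: (visitor, timestamp, path).
# Invented data - real logs carry far more fields.
records = [
    ("1.2.3.4", "2019-01-01 10:00:00", "/"),
    ("1.2.3.4", "2019-01-01 10:00:05", "/products"),
    ("1.2.3.4", "2019-01-01 10:02:05", "/products/widget"),
    ("5.6.7.8", "2019-01-01 10:01:00", "/"),
]

def session_stats(records):
    """Return {visitor: (duration_in_seconds, pages_viewed)}."""
    sessions = {}
    for visitor, ts, path in records:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        first, last, pages = sessions.get(visitor, (t, t, 0))
        sessions[visitor] = (min(first, t), max(last, t), pages + 1)
    return {v: ((last - first).total_seconds(), pages)
            for v, (first, last, pages) in sessions.items()}

stats = session_stats(records)
print(stats)  # {'1.2.3.4': (125.0, 3), '5.6.7.8': (0.0, 1)}
```

Which is the point: no questionnaire, no sample bias - just a record of what every visitor actually did.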
Furthermore, to conduct their research, Lowry et al used mocked-up websites to test their hypotheses. In real-life research there are
no mock sites - they are real. Lowry et al also concentrate on the impact of logos on the perceived credibility of the site - which is fine,
but the logo is only part of the perception. In real life - using real-time multivariate testing - you
can check all aspects of the page that might influence a visitor's perception of the site's 'credibility'.
A related scenario to the 'it doesn't count as legitimate until it's in an academic journal' is this. I recently read a half decent journal article
called 'Challenges and solutions for marketing in a digital era' [Google it if you want to read it] - at least, it was half decent in 2013 when it
was published. Now, as I write [in 2019] it is useful only as background reading. Anyhoo, in that article was the quote: 'As a consequence, brand managers no
longer control the messaging they use to create brand strategies', which was referenced to three articles, one published in 2007 and two in 2012. But here's the thing:
I have a record of me stating at a [non-academic] conference in 2001 that ' ... one of the biggest impacts of the Internet on marketing is that the marketer
no longer has control over their marketing message'. Now, I'm not saying I was the only one saying it - and it's unlikely that I dreamed it up myself
- but isn't it strange that in an academic article the notion doesn't count if it comes from practice? It's only 'academic' when an academic
writes the same thing in a published article. I wonder where Deighton, Fader and Moe & Schweider picked up the idea six and eleven years later? I can't be bothered
to seek out their articles to see if they attribute the notion to me - or someone like me who is, perhaps, more deserving. Oh, I should add that my 'quote' is in the
first book I published in 2007 and in several after that. But hey, books - even academic text books - do not count as proper research in the same
way as academic articles, do they?
A final flaw I encounter in academic research is partially related to reliance on previous academic research, but is one that I can only describe as ignorance of the real world. For example, I have read published work that:
- Assessed the quality of a website by a series of evaluations, one of which was the site's search engine optimization. However, the author used a totally inappropriate search term for that assessment (Google informs ad buyers on common search terms in all industries). The result - in real terms - useless findings.
- Analysed the value to the organization of paid search advertising versus offline advertising - but the researchers ignored (didn't know?) that the highest bid does not automatically give a search ad the highest listing and so the findings were so flawed as to be useless.
- Was titled 'Popularity of Brand Posts on Brand Fan Pages: An Investigation of the Effects of Social Media Marketing' and was published in 2012. Facebook stopped 'brand fan pages' in 2010.
What I find most frustrating about issues such as these is that people who work in digital marketing know the things that some academics seem to be ignorant of. And that is one of - if not the - key reasons I will not reference such articles in any of my books. Those books' primary objective is to help students understand digital marketing to an extent where they can find employment in the field. To direct them to flawed research as part of that learning curve will not only prevent them from meeting that objective, it will hinder their progress.
Academic research, we are told, is used to test practical concepts. In some disciplines - predominantly scientific - this is perfectly valid. But where
human behaviour is concerned there will always be inaccuracies in the responses from participants (yes, I know this is built into research analysis),
whereas with computer-generated website analytics the data is absolute. There is no need for academic research to validate it. For example, all
other things being equal: if real-time multivariate testing shows that hundreds/thousands/millions of visitors to a web page stay longer when it is predominantly
blue rather than green - then blue works best. Asking people if they prefer blue or green in a controlled environment can never give the same
degree of certainty. And yet we are meant to value that academic research more than data pulled from real-life events. I fundamentally
disagree with this notion - and this is reflected in my books.
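To illustrate the degree of certainty that volume delivers, here is a minimal sketch of comparing dwell times for two page variants shown to live visitors. The numbers are invented for illustration [no real site's data], and production platforms use far more sophisticated statistics than this simple two-sample z-score - but the principle stands: with thousands of real visitors, chance is quickly ruled out.

```python
import math
import random
import statistics

# Hypothetical dwell times (seconds) from a live A/B split -
# invented numbers, not measurements from any real website.
random.seed(42)
blue = [random.gauss(62, 15) for _ in range(5000)]   # blue variant
green = [random.gauss(60, 15) for _ in range(5000)]  # green variant

def z_test(a, b):
    """Two-sample z-score for the difference in mean dwell time
    (reasonable for large samples like these)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

z = z_test(blue, green)
print(f"z = {z:.2f}")  # |z| > 1.96 -> difference is unlikely to be chance
```

A focus group of thirty people asked which colour they 'prefer' simply cannot match ten thousand real visitors voting with their attention.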
For an insight into the kind of real user-based experiments conducted by the major websites, find yourself a comfortable chair, serve up a drink of your
choice [tea/coffee/beer/gin] and pass some time reading
Seven Rules of Thumb for Web Site Experimenters.
Experience meant that when it came out in 2014 I was already aware of most of the tales included in it [I'd used some in my teaching and books - and still do],
but if you are new to all things 'digital' - i.e. less than 20 years - this paper [it is not an academic paper] is essential reading.
My scepticism toward academic research is not, however, absolute. Of course there are papers out there which challenge conventional
thinking and so inspire marketers to re-consider practices. One which springs to mind is 'A New Marketing Paradigm for Electronic Commerce'
by Donna Hoffman and Thomas Novak. Published in 1996 - and so written at least a year earlier - this paper predicts [almost] exactly what impact
the Internet has had on digital marketing in the years since that time. It's available online - take a look and see what you think.
To be clear, it is articles in academic journals that bother me most. There is some very good research by academics out there that is
conducted as just that: research. The best of it challenges the findings of other research, or models/concepts that have simply become accepted with no real question.
As a footnote, however, it has to be said that my scepticism towards research also extends to what I refer to as practitioner or commercial research [I use many references to such in my books]. Independent bodies such as Nielsen deliver impartial data and analysis - but others have an in-built bias. An organization that sells software for use in marketing on Facebook will always present research into users' activity on the platform with a positive slant, for example.
My final comment on academic research is that before it is published it goes through the process of peer review. That is, 'experts' in the subject area review the content for flaws. Hmmmmmm ... I'll let you judge for yourself how well this system works. You don't really want to get me started on the game that is getting published in academic journals :-)
Examples of shite research
#1 ... The Impact of Online User Reviews on Camera Sales.
#2 ... The effects of blogger recommendations on customers' online shopping intentions.
#3 ... An investigation of global versus local online branding.
#4 ... Right research ... wrong researcher.
How to cite this article:
Charlesworth, A. (2018). Academic articles: why are so many such shite? Retrieved [insert date] from AlanCharlesworth.com:
This page was first published in February 2018 ... but it may also have been updated or amended since then.
Some of the content is drawn from my books, other pages on my websites or my public rants on the subject.