AI and Artistic Integrity: How AI is Being Used to Fool Art Buyers
The advent of Artificial Intelligence (AI) has undeniably brought numerous benefits to the art world, revolutionising creative processes and expanding artistic possibilities, or at least that’s how the teams behind these engines are selling it. But there’s also a dark side of exploitation and dishonesty that is beginning to blur both creative and ethical boundaries, and that’s something that could have a long-lasting effect on the art world.
The Other Side of the AI coin…
Whether AI really will turn out to be the force that completely displaces and disrupts entire industries remains to be seen, and as much as I call out the pitfalls of depending on AI to take over the creative process performed by artists throughout this article, we need to remain mindful that AI can genuinely bring about positive change by becoming another useful tool that an artist can legitimately use in their business without the risk or worry that the robot will replace them.
One of my latest retro artworks - proudly hand drawn using a digital medium and no AI!
Whether artists turn to AI for image generation, the development of new ideas and concepts, research, planning out work, or the multitude of business-related things that creative types would rather see carried out by anything or anyone other than them, AI, as much as we think we dislike it today, will be something we all need to better understand, or at least learn to live with, in the future.
I’m not completely against AI. It’s way better than I will ever be at generating titles, it can certainly refine the often odd-sounding titles that frequently spring to mind midway through a work, and it’s really helpful in figuring out the metadata labels that need to be applied to images being sold online so that they can be found through search engines. Using AI to do these things saves me about a day of admin each month.
Tasks such as search engine optimisation take so much time away from the creative process that using AI to at least lend a helping hand should be seen as a help rather than something that takes away a human role. I’m sure SEO specialists are reeling right about here, but most independent artists have very few or even no employees and they don’t always have a budget for professional SEO services, so anything that helps them focus on higher-value tasks is savvy business practice rather than anything sinister.
I frequently use AI to help with search engine optimisation, something I’m comfortable with because 90% of my business doesn’t come via the web, and I’ve started to take advantage of it more recently as a tool to reduce some of the administrative burden of running my business. It doesn’t do everything for me, but it can be really useful for some of the small, tedious and repetitive tasks, which buys me some extra creative time. What I don’t use it for is the creation of art, partly because I really enjoy the creative process, and partly because AI art, well, it just isn’t very good right now.
But something that could completely change the playing field for artists is beginning to emerge, and that is AI’s role in more dubious business practices. It’s a dark side that’s becoming more common, and it’s even celebrated through click-bait headlines on the web that tell us how a side-hustle artist or author made mega-money selling AI work that took little to no effort to create. Great if you don’t subscribe to any code of ethics, I guess.
There’s certainly a trend right now of inexperienced non-artists and even would-be authors chancing their arm at an art career or, more specifically, an art or writing related side hustle. They’re probably thinking that creativity is a well-paid and easy work-from-home vocation. Experience suggests that it’s not truly a work-from-home vocation, it’s definitely not easy, and as for well paid, well, someone is making money in the art world, but from the experience of the past four decades, the real money is made in the secondary art markets.
The use of AI to generate art which is then offered for sale to the art-buying public stops being harmless fun when it comes to freelance work or quality control. The speed at which AI can generate work means that markets can become flooded with sub-par work very quickly, and because the work takes little to no time to create, it is also incredibly inexpensive to produce.
Forgotten Play Days - part of my forgotten consoles series of works. Again, no AI here folks. Just time, lots of time...
We’ve seen this before with print on demand services offering batch uploads across pre-built templates. Sure, you can have a keyring with my badly cropped image of a stolen positive quote in red, green, blue or yellow; hey, you can even have it on a sippy cup. It makes me sound like some art purist, but we should at least expect some element of quality control in the art world. Just because you can, doesn’t mean you should; you’re diluting the market, including your own.
Not everyone who buys art spends the kind of money reported in the media whenever art sells for millions of dollars. A lot of art is affordable, and the markets for inexpensive home décor art are fundamental to the success of many independent sellers who use storefronts such as Etsy. This kind of art market is a high-volume business; collectively it generates more income than the fine art markets where the multi-million dollar works are sold, but that income is then spread across many more artists.
There’s also a growing number of relatively well-established artists who have decided to utilise AI image generators without disclosing to their potential buyers that the image has been created using AI. Ethically and morally, I’m not convinced this is the right way to go; it feels completely disingenuous to exchange art for money under the misconception that the work had been created by the artist’s hand. Artistic integrity should be the foundation on which art careers are earned, and this opaqueness is further muddying an industry that already has its fair share of trust and integrity issues. So there are some challenges in the art world when it comes to AI, but as AI models mature, maybe the real challenges are yet to emerge.
A Brief History of AI Art Generation…
Art created using artificial intelligence is often predicated on the end user typing some simple text descriptions into a web page or application and allowing the AI engine to generate an image based on whatever dataset or datasets have been used to teach it, but it wasn’t always done like this.
Art produced using AI isn’t entirely new. Early AI models appeared as early as the late 1960s, with the first system of note debuting in 1973. That system was Aaron, developed by Harold Cohen, a British artist (1 May 1928 – 27 April 2016). Aaron was a computer program with the specific purpose of generating what was at the time called autonomous art. When it arrived it gathered a great deal of attention and was displayed in a number of exhibitions at various museums and galleries, including the Tate Gallery in London.
The difference between Aaron
and modern day AI art generation however, is significant in that modern AI
image creation is accessible to virtually anyone with an internet connection
and a keyboard and there is no requirement on the end user to even begin to
fully understand how it all works. There are very few barriers and the price of
entry into the space starts at the princely sum of free.
There were plenty of barriers in the late 60s. Aaron had been built by Cohen himself, and a level of skill was required to create the model that then created the art. Today, the heavy lifting has been done by others, and AI art creation for most people is little more than a single click of a mouse or entering some descriptive text.
That’s really what sets some of today’s so-called AI artists apart from Cohen’s use of Aaron. Cohen did more than enter descriptive statements to generate the work; he essentially created the virtual brush and paper with a specific intent to create what was, at the time, something that hadn’t been seen before. To some extent it could be said that Aaron was just as significant as the output it generated and was, in itself, intrinsic to the end result.
Everyone’s an artist…
The introduction of modern AI tools has made it incredibly easy for anyone with a computer or mobile internet connection and five minutes to spare to generate what, on the surface at least, looks like incredible artwork in the style of almost any artist in the history of art, ever.
Social media is awash with output from the latest tranche of apps and websites, all providing the user with an experience that needs little to no instruction to use, and in many cases these systems are available either freely or at minimal cost.
AI engines such as DALL-E 2, Craiyon, Adobe Firefly, and even Bing, which can create images using just Microsoft’s Edge browser, have become as well known as the phrase ChatGPT. It’s fair to say that the market for these kinds of applications has grown enormously over the past couple of years, and for most people these systems are fun, but they’re not really fulfilling the full potential that AI can offer. I think that’s the point, though: if AI is to succeed in the consumer market you have to give consumers something that they can get on board with.
AI image generation is already exponentially more advanced than it was even six months ago, and the fast pace of AI development means that it will become even better in time. A point to bear in mind here is that, to date, AI development has clicked along at a rapid pace, and just as we’re thinking it will make some kind of exponential leap in the near future to give us what we think we really want from it, things might soon start slowing down in some sectors of the industry, because the industry is facing a myriad of supply and demand bottlenecks that might not be all that easy to resolve quickly.
Those bottlenecks are already beginning to slow things down, driven by the demand for GPUs, CPUs and other components needed to train the AI models and run the data centres. These chips and other components aren’t even close to the consumer-grade chips we recently experienced a global shortage of; these are ultra-high-end, high-specification, massively powerful parts that are much more expensive than the chips most people will be familiar with, and even when they’re more abundant they’re not always that easy to source and procure. You don’t tend to pick these things up from the local PC store.
People close to the supply and manufacturing side of the industry are already starting to hear the loud sucking sound these high-end components make as they fly off the shelves, leaving a void with little to no stock available to replace them. The companies gearing up to move into AI in the future are already in the process of securing the components they need, or have already done so, or at least the savvy ones will have. The rest, who have been slow to adapt, I think are going to fall behind rapidly.
The issue now is the lack of
production facilities with the capability to create new chips and when most of
the demand is currently falling on a handful of manufacturers who are
themselves struggling to secure the raw materials, it’s a technology problem
that could slow down the exponential leaps forward that we’ve been seeing for
the past couple of years.
Despite these supply issues, the pace of AI’s development behind the scenes is happening so quickly that what we think of as ground-breaking today will look massively outdated tomorrow, but it will be the supply of the technology needed in the future that ultimately determines just how quickly we get to see any of these next generation AI systems.
Given these bottlenecks we might even begin to see smaller AI models that rely on less computational power, and there is some wider benefit in that. Smaller models can be more focussed and bring better results; they’re not as reliant on the latest technologies and they’re generally less expensive and less environmentally draining to operate.
That’s something else that could ultimately decide AI’s fate. The current AI engines rely on having access to lots of power, and consumers are beginning to ask questions and become more focussed on companies’ carbon strategies. As governments race towards net zero targets, it’s unclear how well most of these large AI models will perform environmentally in the future, and the hype train for AI, at least right now, seems to have overshadowed the burning question: how do you make AI more environmentally friendly?
Smaller AI models might be the answer to some of the environmental questions, and there is also that wider benefit in that they can be trained on very specific datasets. These models already exist today for things like medical and scientific research, and the results that those models generate are generally laser focussed on providing a specific outcome, which also means that the best ones are usually very good, very accurate, and massively more efficient than the large datasets that more generalised models use.
The issue with large datasets and AI engines which are designed to be everything to everyone is that they then have to be massively scaled, and when you scale technology, a conversation has to be had around just how sustainable that technology is, not just today, but in the future too. Technology at scale doesn’t always offer savings in either money or power.
AI will need to evolve not just by providing iterative jumps that make the models more adoptable, but also in terms of the computational power and energy consumption these models require. It certainly needs more efficient algorithms that are less reliant on the amounts of power they use today. More efficient algorithms can have a significant impact on what’s actually needed to run these systems, and one of the problems often seen with developers is that they have become overly reliant on abundant resources and write less efficient code. Go back to the 80s and look at what companies like Atari did with just 4K of RAM; that’s the kind of efficiency that will be needed again.
Quantum computing will ultimately have the potential to solve complex problems with less computational power and could deliver better results faster, but the promise of mass quantum computing is far from being fulfilled anytime soon, certainly in a way that makes it affordable to those who might get the most benefit from it. We’re in the midst of a global economic crisis and science budgets are being cut.
Designing specialised hardware tailored for AI tasks, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs), can greatly improve energy efficiency. These chips can be optimised for specific AI workloads, reducing the need for general-purpose computing resources and reducing the energy needed for tasks to be completed, but the issue here isn’t that the technology doesn’t exist, it’s that the manufacturing process is carried out by so few companies. You only need to look at the cost of a Terasic DE10-Nano board to know that there’s a supply and demand issue with FPGAs.
I suspect that in the future we could begin to see some of this specialised hardware evolve into neuromorphic computing, a technology that aims to mimic the brain’s structure and function. It can lead to highly energy-efficient AI systems, but it’s really complex. The idea is that these systems can process information in a more parallel and adaptive way, similar to how the human brain works. The downside is that engineers and researchers with expertise in neuromorphic computing are relatively rare, which drives up the labour costs associated with developing and maintaining these systems. Efficient it might be, but it’s also incredibly expensive when the expertise needed to build and operate the model just isn’t in place, and no one seems to be doing much in the academic space to encourage people into the industry.
Joystick and Game Included by Mark Taylor - another retro work and another forgotten console from the 80s... still no AI...
There are other ways in which AI could become more efficient and more environmentally acceptable than it is today, but the models powering the AI will need to adapt. Techniques like knowledge distillation will eventually need to be used. This involves training those smaller and more efficient models I spoke about earlier to replicate the behaviour of larger models. Model pruning, on the other hand, involves removing redundant or less important parts of a model. Both approaches reduce computational demands without sacrificing much performance, which means less computational power and less energy is needed.
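For anyone curious what knowledge distillation actually looks like in code, here’s a minimal sketch of the core training step in Python using PyTorch. The `teacher` and `student` networks, the temperature and the weighting are placeholders I’ve assumed purely for illustration; this shows the general technique, not how any particular image generator is trained.

```python
# Minimal knowledge distillation step (illustrative sketch only).
# Assumes `teacher` and `student` are classifiers with matching output sizes.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: the small model learns to match the large model's softened output.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: the small model still learns from the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def train_step(student, teacher, images, labels, optimizer):
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)   # the big, expensive model's predictions
    student_logits = student(images)       # the small, cheap model we want to keep
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

If it works, the end result is a much smaller model that behaves a lot like the big one but costs a fraction of the power to run.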
Another way that AI might begin to address some of the power and component issues is, in a sense, the opposite of training on smaller datasets. Transfer learning starts with a model trained on a large dataset which is then fine-tuned for a specific task using a much smaller one. Few-shot learning goes beyond this: the model is adapted using only a handful of examples, meaning less power is needed, and the theory goes that smaller, more focussed training sets will provide far better results on any specific subject the model has been tuned for.
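As a rough illustration of the transfer learning side of this, the sketch below (PyTorch again, assuming a recent torchvision) takes a backbone pre-trained on a large, general dataset, freezes it, and fine-tunes only a small new head on whatever narrow task you care about. The class count and learning rate are placeholders, not a recipe.

```python
# Transfer learning sketch: reuse a large pre-trained backbone, train a tiny head.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 10  # placeholder: however many categories the small dataset has

# Start from a model pre-trained on a large, general dataset (ImageNet).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone so the expensive, general-purpose weights stay fixed.
for param in backbone.parameters():
    param.requires_grad = False

# Swap the final layer for a small head trained only on the specific task.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

# Only the new head's parameters are handed to the optimiser.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```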
Data centres are another incredibly expensive resource. Anyone who subscribes to a cloud storage account will have seen at least an element of shrinkflation, less product for a higher price, and with many cloud storage providers realising that energy costs are significantly higher lately, the general trend has been to reduce the amount of storage available to end users while usually increasing the price they pay. Not all that long ago we were inundated with free unlimited cloud storage offers; today those products have limited the space and most of them now charge a king’s ransom for anything useful.
To get better, data centres
themselves need to become more efficient, particularly those that deal with AI
engines. They really need to be laser focussed on renewable energy to power
them and they also need better and more efficient cooling for the vast amount
of computational power that inevitably generates more heat.
Where AI models are replicated, we have to bear in mind that the power needs rise exponentially. Developing AI systems that dynamically allocate resources based on the task’s complexity can prevent over-utilisation of computational power. This could involve scaling down resources during low-demand periods and scheduling the most demanding workloads for times when energy is generally cheaper and more plentiful.
To achieve some level of environmental nirvana, far more collaboration needs to happen across industries and research institutions than we see today. Collectively, if they can accelerate the development of energy-efficient AI technologies by sharing knowledge, resources, and best practices, it could lead to faster advancements, but everyone in the AI space right now seems to be focussed on being first past the post, where the short-term financial rewards are going to be higher. The problem with this approach is that development will eventually slow to a crawl and it will ultimately be more expensive, both in a financial sense and, even more so, in the environmental impact it will have as the technology becomes more abundant.
Ultimately, achieving a more environmentally friendly AI involves a combination of innovation in algorithm design, hardware development, energy-efficient infrastructure, and responsible decision-making by AI practitioners. As AI continues to evolve, a concerted effort towards sustainability will be essential for minimising its environmental footprint, and that’s another point to remember the next time you fire up the latest AI image generator. It might save you creative time and it might create passable art that sells into a short-term, trend-following yet shallow market, but the environmental impact alone, if everyone is doing it, should raise some strong moral and ethical questions. I’m not sure, as an artist, that you could even begin to claim that you are environmentally sustainable if you rely on masses of backend power, even if you’re not paying for it, and art buyers are definitely becoming more environmentally astute.
Spotting when AI is being used…
I suspect that in time AI will become smart enough for us to be unable to distinguish whether something was created using AI or not, and that could compound some of the issues we see today when AI is used but not disclosed. Today, making the distinction between AI and human art is possible with some visual training, but we shouldn’t take it for granted that it will always be the case. AI is still relatively early in its journey.
Even today, the images created by AI are a huge leap from the days of Aaron; some of today’s output is even passable as artwork that wouldn’t entirely look out of place in a gallery, with some subjects looking almost indistinguishable from those drawn or painted by human hands. They’re certainly a long way from the neural network images that became popular among mobile users in around 2016, apps that created abstract works based on photographs you supplied and uploaded, and very much a precursor to the technologies we are seeing today.
But there are some limitations with today’s AI which, to the casual viewer, might not be immediately visible. Once you develop a slightly better understanding of AI’s current image creating limitations you begin to notice the cracks and the similarities that exist between AI works.
There’s the lack of detail and the distortions for a start, flaws that just wouldn’t be present had the work been created by a human, even an inexperienced human who has rarely created art. The flaws look like computational errors which would, almost ironically, be very challenging for a human to recreate with the same effect. At the moment, so long as you have some basic knowledge of how AI images are structured and generated, it’s relatively straightforward to determine when something is or isn’t using AI.
If you are aware of the limitations, it’s possible even with a small amount of visual training to spot images created by AI, particularly images created with the more recent AI engines such as Adobe’s Firefly and DALL-E 2. If you have been exposed to human-produced artworks over the course of an art career, the task of identifying AI becomes even easier.
AI images also tend to lack the emotion that will be more obvious in a work created by a human. AI can create some pretty soulless creations at times, and we’re maybe, even with the current pace of change in the technology, still half a decade or more away from AI that can replicate some of those personal nuances that humans bring to a work of art. Remember, AI is trained on datasets that have no idea of context from a human perspective; that’s also why we see so much bias occurring in some of AI’s output. What comes out of AI is only ever as good as what goes in, in this case, the data the AI is trained on.
When it comes to AI image generation, today it feels like a by-product of everything else that AI can do. As it evolves, for artists, it will eventually serve one of two purposes: it will either help enormously with the creative process or it will become the force that displaces an entire industry. Which road it takes will be largely determined by how artists begin to embrace it, or move away from it, and to some extent, the public’s acceptance of AI’s role in creating art.
Why can’t AI draw hands?
One of the big problems with AI image generation today is that it relies on what I’ve come to term the billboard effect. When you look at an image created by a human, be it a photograph or some other image, what you look at tends to be in high resolution. Mostly, printed images are created by thousands upon thousands of small dots printed by an inkjet printer or represented by pixels on a screen. In short, most images that you will be familiar with have detail that is clear even when looking at them for a prolonged period of time.
Billboards can be printed using very low resolutions; these images are created with far fewer dots, nowhere even close to the level of detail that would be found in a fine art print. The billboard images work because we’re viewing them, often only briefly, from a distance.
Once we get around 650 feet
away from an image, our eyes can only resolve around one pixel per inch. While
300 dots per inch seems to be the golden rule for print resolution, in truth,
it’s not a super helpful number that can be applied to everything and it also
makes prints more expensive to produce. But, it is an accepted standard that
reproduces the detail needed for the best fine art prints, it’s also
unforgiving for artists because at that resolution, any discrepancies in the
work are easier to spot.
A billboard is usually printed at around 15dpi, the advertising on the side of a bus is usually no more than 72-100dpi, and many fine art prints are printed at 240dpi, although I tend to stick to the regular 300dpi because I put so much effort into creating intricate, even small, details in my retro works and some of the prints I offer are sold at a size large enough to notice the difference up close. AI models found online are usually creating work at a screen resolution somewhere in the region of 72 dots per inch. In short, not really good enough to hang on the wall.
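To put some very rough numbers behind that distance argument, here’s a small Python sketch that estimates the print resolution a viewer can actually resolve from a given distance. It leans on the common one-arcminute visual acuity rule of thumb, which is an approximation rather than the exact figures quoted above.

```python
# Rough estimate of the resolvable print resolution at a given viewing distance,
# using the common 1-arcminute visual acuity rule of thumb (an approximation).
import math

def resolvable_ppi(viewing_distance_inches: float) -> float:
    one_arcminute = math.radians(1 / 60)                    # smallest angle a typical eye separates
    smallest_detail = viewing_distance_inches * math.tan(one_arcminute)
    return 1 / smallest_detail                              # dots (or pixels) per inch

for label, feet in [("framed print up close", 2), ("across the room", 10), ("billboard", 200)]:
    print(f"{label} ({feet} ft): ~{resolvable_ppi(feet * 12):.0f} ppi is all the eye can use")
```

Run it and the billboard case comes out at only a dot or two per inch, which is why 15dpi printing gets away with it.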
So, when you look at low resolution images on a billboard from a distance, you are doing the heavy lifting by filling in the missing detail. You generally only ever view a billboard for a short period of time, and you are usually standing too far away to see any detail even if it were present, because from that distance your eyes can’t resolve much more than about one pixel per inch.
With AI images, it’s kind of
the same thing. It’s an approximation of an image, but get up close and study
it for a few seconds longer and the cracks begin to show. We might stumble
across AI images in the news or online, both of which would be providing us
with images that have much lower resolutions than would be expected from a fine
art print, but we will usually move on much more quickly than if we were
viewing a high resolution artwork that has been professionally printed and hung
on the wall at home or in a gallery where we’re also closer to the work. AI
image generation is more smoke and mirrors than we might initially think it is.
AI simply doesn’t do detail very well when it comes to images. It struggles with other things too; we will often find misquoted information or information that makes no sense, and at times it hallucinates more than a hippy on a mushroom trip. When AI is focussed on a particular task and is less generalised, it tends to perform way better, hence medical research AI, which is predicated on learning from those very specific, small data models and is often much better and faster than a human at the narrow task it was trained for.
When AI becomes part of a consumer-facing system that tries to be everything to everyone, it struggles. In part, that’s because unlike humans, who are usually inspired by reality, AI is inspired by whatever dataset it has been trained on, and it literally doesn’t know anything else. At best, this means it can only ever create an approximation or reproduction, because it has no real context or reality to base the output on. Until the issues of context and emotion are addressed in the datasets, I think AI art will continue to struggle, because there will be missing data and missing detail that humans understand because they have life experience to baseline it on.
If you look at AI-generated hands, they tend to be deformed for one reason alone: the datasets containing the images used to train the AI are more likely to have featured faces, and faces tend to be much more prominent than hands when you look through any photo collection.
That said, if we look at how much hand generation has improved even since the beginning of this year, we are now getting to a point where even hands are becoming more identifiable as hands. It’s not that AI is getting better; it’s that the datasets are getting bigger and more varied. There are still plenty of tells though: the occasional extra thumb, or hands appearing in an anatomically incorrect place, are giveaways that AI is being used.
Unnatural textures, either too smooth, too perfect, not quite right, or shown in the wrong context, can also be an obvious tell, as can plastic-like skin and unnatural skin tones, or the ultimate giveaway this month, retro-futuristic colour palettes. When recreating materials, the materials will often look overly uniform, with no creases or shadows. If there are obvious brush strokes, look for replication of the same brush stroke within the image; this would imply that a digital brush algorithm is being used.
There will be other inconsistencies or objects that appear to be out of place. Any light refraction included in an original photograph used to train the AI is likely to still be present in the output, often represented by weird black dots and lines. Lighting is another giveaway: a human artist will instinctively know that reflections are generally uniform and that light is often cast from a single point, especially in landscapes where either the sun or moon is the light source. AI generally doesn’t understand how to render light sources that well, so it casts misplaced shadows and reflections.
If you zoom into the image, you might notice unnatural poses and expressions, stark colour changes that make no sense, artifacts left behind from the original source, pixelation in some or all areas of the image, or a halo effect that looks like it should be fixed with an application of Gaussian blur in Photoshop. You will especially see this in fake images of anything in flight; UFO photographs can often be debunked within seconds when AI has been used.
Another giveaway is that the composition of the image tends to be disjointed, or the image may be unnecessarily cropped in a way that makes little to no sense. Artists are usually taught composition or they pick it up as they gain experience; AI certainly has a way to go before composition becomes natural, partly because it has no reference or context other than its training images. Size and scale are difficult to comprehend in a photograph unless you have some other context or point of reference.
Repetition and unnatural patterns are another area where AI struggles. Misaligned repetition of a pattern is often the first clue, but also look out for patterns that would at first appear to fit together if they were realigned, but would then still not match perfectly. AI is still incapable of creating perfection.
You might also want to conduct a reverse image search of the image in its entirety and of sections of it. The reverse image algorithms will typically be looking at the same reference photos online that were used to train the AI, so it’s relatively easy for them to pick out elements of an image that may be from another artist’s work.
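If you want a quick, local approximation of that check rather than relying on an online search, perceptual hashing is one way to do it. The Python sketch below uses the Pillow and imagehash libraries to compare a suspect image against a folder of your own reference images; the folder, filename and threshold are placeholders, and it’s a crude stand-in for what the online reverse image search engines actually do.

```python
# Crude local similarity check using perceptual hashing - not a true reverse
# image search, just a rough approximation of one.
from pathlib import Path

import imagehash
from PIL import Image

REFERENCE_DIR = Path("reference_images")   # placeholder: folder of known works
SUSPECT_IMAGE = Path("suspect.jpg")        # placeholder: the image being checked
THRESHOLD = 10                             # smaller hash distance = more similar

suspect_hash = imagehash.phash(Image.open(SUSPECT_IMAGE))

for ref_path in sorted(REFERENCE_DIR.glob("*.jpg")):
    ref_hash = imagehash.phash(Image.open(ref_path))
    distance = suspect_hash - ref_hash     # Hamming distance between the hashes
    if distance <= THRESHOLD:
        print(f"{ref_path.name}: suspiciously similar (distance {distance})")
```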
It’s also worth looking for the artist’s signature. If it’s present and in the usual place (most artists will sign their work in the same way, usually in the same area of their work), look for smoothing of that area, where the image looks like it has either been healed or cloned from another section of the image. In instances where the signature or watermark has been removed, smooth areas tend to mean that the image has been manipulated, and sometimes the area looks as if a section of the surrounding area has been pasted over the signature.
Forgotten From Taiwan by Mark Taylor - this sold better than others, and today it is collectible. Again, plenty of Easter eggs to find in my latest works...
You might be wondering how easy all of this is, but on a recent social media scroll I managed to pick out a dozen or so images that had definitively been created with the use of AI, and none had been disclosed as being produced using artificial intelligence. Just to be clear, there was no need for me to perform any kind of forensic shenaniganry to categorically prove each image was AI; it was evident visually, backed up with a screenshot and a reverse image search. The whole process, well, it took me less than 10 minutes.
What gave the game away for these images was that two were juxtapositions of another artist’s work which I was familiar with, and both were essentially the same image. Another used a colour palette that would only make sense with an entirely different subject, and there was no smooth gradation of colour; it was stark and pixelated in a way that wouldn’t indicate the image had simply been badly resized. A number of the images had been uploaded to a print on demand service and were available for sale, again with no disclosure of the use of AI in the marketing description. If you are familiar with the capabilities of any of the major digital art applications such as Photoshop, Corel, or Serif, that alone will give you a good foundation on which you can become a master sleuth in this field.
My experience spans back to day one of Photoshop and, before that, Deluxe Paint on the Commodore Amiga and even earlier 8-bit micros in the 80s with their rudimentary imaging programs, and I’ve used almost every digital art application since. That long-term experience means I am probably in the unique position of being an official anorak in this department; I can tell the difference between Photoshop being used and Corel. But with only minimal visual training I really do believe that almost anyone can currently master this dark art of distinguishing an AI image from the real thing, and I would certainly encourage art buyers today to be extra cautious when buying work they believe to have been created solely by the artist.
That brings me nicely to another point. Most savvy art buyers will already be looking out for these visual cues, and it’s not just the buyers of the most expensive works. As AI becomes more widely accessible, the people who currently buy work have the same level of access to the same AI tools as anyone else. Some of the more recent conversations I have had with my own buyers mostly confirm that they’re quickly becoming adept at spotting when something is amiss with a piece of work, and more of them are rightfully asking whether AI had been used in its creation.
The AI Artists Can No Longer Catch Everyone Out…
When AI art is generated by a person who has minimal experience with AI prompts, I suspect that some of this missing detail is also the result of the artist lacking an understanding of how AI algorithms are programmed. The models AI image generators are being trained on are getting better, but the results we’re still seeing from inexperienced AI artists aren’t quite reaching their true potential just yet.
To get better results you need
to start out with very specific instructions and additional modifiers need to
be progressively included to change the conditions on which the final image is
generated. Having an understanding of composition and artistic styles would be
useful here, without that background knowledge, AI derived art will always look
a bit soulless and often generic.
It’s having that extra level
of art and design knowledge that would make an AI image more believable, but
many who are casually creating art on these platforms will take whatever comes
out. The instructions we type into AI are called prompts, but prompt modifiers
can be used to refine the results. If the modifiers are grounded in the user
having a knowledge of art history, the modifiers can be made much better and
the results will be much more believable.
Prompt Modifiers…
Your subject is what will be included in the image; it becomes the main focal point, so a modifier would not only include the subject but would also include, for example, a specific background, a particular location, or an action. Modifiers give the subject additional context, because without the additional context provided by a human, AI is generally very prescriptive. Ask it for an image of a frog and you will get an image of a frog, but ask it for an image of a frog sitting on a tree branch in the jungle surrounded by exotic flowers and you then have much more context which the AI can work with to level up what could otherwise be a rather bland image of a lonely frog.
You can further refine the prompt modifiers to include things like actions, so the frog could also be reading a book. If you think back to the lesson on verbs during your days at school, words such as fall, run, jump, push, pull, play, sit, and stand can all be used to give even more context to the prompt, each time adding a further element of detail on which the AI can do its thing.
Beyond actions, you could also apply additional modifiers, such as giving it the context to produce the work in a specific artistic style; this could be photographic or abstract, equally it could be panoramic, or it could be in the style of a particular artist. If the model has been trained with images reflecting those prompts, the output again becomes more believable.
Macrophotography is something that can be recreated well with AI; there’s less detail in the background to get wrong. But what you are more likely to see from an inexperienced AI user with little to no artistic knowledge is usually the same image that anyone could generate with a very simple prompt. That’s another way of figuring out what’s AI and what’s more likely to be original: try to recreate the same image using a simple description, and chances are the output you see will be the same or a very similar image.
Prompt modifiers for image
generation can even extend to materials and mediums, lighting, colour palette,
perspective, mood, or even an era or historic period in time.
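To make that layering of modifiers a little more concrete, here’s a small Python sketch that builds up the frog prompt from earlier and then hands it to an image generation API. I’ve used the OpenAI Python SDK and DALL-E 2 purely as an example of the kind of call involved; the model name, size and the way the modifiers are structured are assumptions for illustration, not a recipe.

```python
# Building a prompt from a subject plus layered modifiers (illustrative sketch).
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and configured

subject = "a frog sitting on a tree branch in the jungle surrounded by exotic flowers"
modifiers = [
    "reading a book",                 # action
    "in the style of a watercolour",  # medium / artistic style
    "soft morning light",             # lighting
    "muted green and gold palette",   # colour palette
    "wide panoramic composition",     # perspective / framing
]
prompt = ", ".join([subject] + modifiers)

client = OpenAI()
result = client.images.generate(model="dall-e-2", prompt=prompt, n=1, size="1024x1024")
print(result.data[0].url)  # link to the generated image
```

Each extra modifier narrows down what the engine has to guess at, which is exactly why context matters so much here.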
From what I have seen on
social media lately, people do seem to be over-obsessed with steam-punk and
retro-futurism, probably because those are also two design trends that have
come back into vogue of late and those two styles are often used to create
example works.
Each of those modifiers that
are included in the text prompt will incrementally bring more life to the
image, but they could also bring more problems. Hallucinations are a real thing
with AI which probably explains some of the really weird things we have been
seeing as people dabble with the online image generators for five minutes
before declaring themselves a real digital artist.
Most casual AI users will
start out only entering very basic prompts and the output will only ever be as
good as the user input together with the complexity of the data models used to
train the engine. Overuse of modifiers can also lead to issues, so to maintain
some consistency and quality, some generative AI technologies will limit the amount
of context and the number of modifiers that can be used.
That’s where AI image generation begins to fall apart. Right now it’s a maturing technology that has yet to fully demonstrate its potential. In the great scheme of things it’s been around for only five minutes compared to how long artists have been creating art, and the current AI tools are arguably only the first real generation of tools that regular consumers can actually play with.
There are some other issues
with AI that haven’t as yet been resolved. The training data is one of the
biggest issues. AI image generators are trained on massive datasets of
images. These datasets typically contain a wide variety of images, but they
often have a bias towards certain styles or palettes. For example, a dataset
that is heavily weighted towards anime images will likely produce AI images
that have a similar anime style and if the data sets have been limited in this
way, that will limit what the AI engine is capable of delivering regardless of
how well you craft the prompts.
The algorithm is only ever as good as the dataset used to train it. If that dataset includes any biases at all, the AI is compromised and will carry the bias into the output, and a prompt modifier isn’t going to change that bias, other than possibly adding in further biases.
With art, that bias could also be in the form of cultural misappropriation, especially if the datasets used to train it included images of specific cultures or elements that are sacred, sensitive, or otherwise important to specific communities. If the datasets didn’t address context, the biases could be reinforced and even amplified.
There’s a real risk that AI
could use an amalgamation of various images that each have sensitivities and
then generate an image that contains not just biases, but other issues that
would make the output even more challenging. As we’ve learnt through history,
not everyone will be attuned to the nuances of cultural misappropriation, and
AI definitely struggles in this area. Professional artists will almost always
be more sensitive to those specific issues and will be much more careful about
what they release.
The algorithms that are used
to generate AI images are also a factor. Some algorithms are better at
generating certain styles or palettes than others. For example, some algorithms
are better at generating realistic images, while others are better at
generating abstract images. Bias will also be visible in the output here too,
if an algorithm is biased towards a specific art style, the end results will
always be tinged with elements of that style.
As with all things consumer facing in the AI world, what we get to see today isn’t what the big players will already be working on for tomorrow. It’s an early foray into AI for most consumers, but frankly, that seems to be just enough to convince a bunch of folk who haven’t picked up so much as a crayon since their formative school days that they are real digital artists, and there’s a problem with this, especially around cultural misappropriation, but also around intellectual property rights, copyright, and of course, a market saturated with un-curated AI generated content. This distorted view of what a modern day artist is could ultimately present the art world with another challenge, one that could have wider implications for the industry in the future.
An art world that could fall apart…
The introduction of AI, and specifically AI that is consumer facing, becomes a problem for artists who have sunk literally decades of their lives into mastering their craft. If casual art buyers are happy with an AI generated print that can be produced for pennies or for free, even if it has some glaring artistic and design issues, they are more likely to use AI than pay the human overhead, especially at a time when art, or at least the art prints you might buy from big box stores, is seen as decorative and, in some cases, even disposable.
When you change the colour of the curtains, you can easily change the print, and in a cost of living crisis like the one we’re seeing globally today, value is becoming more of a driver in the part of the art world that isn’t bidding on an original Matisse in an auction room on a Tuesday night. That should be concerning, because that Tuesday night art world represents considerably less income than that generated by the other markets, where art is priced more affordably for the masses who purchase it.
When you give the public the ability to own a digital paintbrush that is confined only by words and imagination, the competition for existing artists becomes everyone who owns a device capable of accessing an AI tool and who opens up a print on demand account. We might even get to a point, at least temporarily, where you are competing with the exact same people who would once have purchased your work.
This stuff is also addictive. AI image generation from the comfort of your sofa can feel empowering; armed with nothing more than a well-crafted prompt, anyone can now create almost any image imaginable, and they won’t stop at one or two images, because there’s a certain challenge to be had in bettering your past efforts.
The issue here is that at some
point, if more people believe that what they are producing is worthy of an
upload to a print on demand service or being sold through some quickly strung
together online store, we will end up with market saturation and an inability
to find unique works crafted by hand.
The impact of a saturated art market will be felt first by those current artists who use their hard-earned skills and talent to sell relatively low-cost prints to casual buyers who are focussed as much on value as they are on the aesthetics of the work.
The same can be said for those
with tight budgets and an ambition to write their first e-book. I might charge
around £X or $X to create a bespoke book cover which would potentially include
many hours of work over a period of weeks or months. A new author might not
have that kind of budget available, nor might they appreciate that books really
are judged by their covers, and if they’re writing down a description so that
an artist has at least some kind of design brief, there’s a fork in the road
that the author can now take.
Once the writer has created that brief they can either type it into an AI engine and generate multiple images until they’re happy, or they can hand that brief over to an artist along with the money. My experience tells me that they would be much better off going with the expertise they would get from commissioning an artist, but most writers who are just dipping their toes in the water might not yet be aware of the benefits they get from using human experts over AI to create the imagery needed to sell books.
If they feel the result from AI is good enough, that’s often more than enough, for those who can live without the specialist advice and support, to nudge them in the direction of AI, which then takes away the need to hand any money to an artist. Of course, they won’t have the support that an experienced artist brings to the table, which could ultimately result in many more book sales, but a new author might not get the volume of sales that would make the expense of a skilled artist an absolute requirement.
In most cases, especially when all that’s needed is a thumbnail for an e-book cover, the result from AI is going to be fine, so much so that I’m not entirely sure I would advise new writers to do anything other than utilise AI, certainly in the early days of being an author, unless you can categorically say that you are in it for the long haul and you have a real belief in what you have written. Let’s be honest, a whole lot of first-time self-published books fail early on.
It's no different from the
situation we already have with e-books. It’s easier than ever to self-publish,
but take a good look through any e-book store online and you will find
short-form books with incorrect grammar all over the place and because you can
publish quickly, many of the e-book stores rapidly become flooded with books
that are little more than brief PDFs. What might have once been included in a
blog post is now monetised to the point where the income is still likely to be
better than the ad-revenue you might get from
publishing an ad-supported blog.
It's the same story for other services where the commissioner feels they can get away with using AI: logos, digital ads, flyers, exactly the kind of things that would historically have been commissioned through an artist or graphic designer. The results won’t be anywhere close to the results you can get from an experienced hand, and the advice that a client would usually rely on to guide them through what can be a complex design process just won’t be there with AI, but the cost savings will ultimately become the driver and quality becomes secondary at best.
We can already see plenty of
articles online that outline how someone had been able to give up their nine to
five and utilise AI to generate dozens of e-books, and if you are producing in
volume there’s a much better chance of selling enough to make some level of a
living wage, but there are all sorts of issues around whether what is being put
out there does or doesn’t include the intellectual property of others.
Real Fear or Misplaced Fear of AI…
Plug and Play by Mark Taylor - so many accessories and it was still a flop. Highly collectible today, as should this artwork be!
A lot of artists I have spoken
to recently have told me how much they fear being completely displaced by AI in
the future, and especially those who work in fields that could become much more
susceptible to automation. If you are working on repetitive patterns and AI is
then able to recreate that pattern, it probably is more suited to carrying out
repetitive tasks more efficiently, but as I said earlier, repetition doesn’t
always come out perfectly with AI, at least not yet.
This is something that today’s artists could eventually embrace. If the initial work to create the pattern or design is carried out by skilled hands and human emotion, there will be obvious benefits in using an AI model to increase productivity. It has the potential to drive down the cost of repetitive, low-level tasks or even take those costs away completely, so it’s a no-brainer that AI should be used here.
I think the issue we have
today is that AI is already being looked at as if it’s some golden panacea that
will reduce the need for expensive human intervention for pretty much
everything, and if business owners are not fully aware of AIs current
limitations, there’s a real danger that they only realise its failings and
shortcomings after things have gone wrong.
There is an underlying issue that needs calling out…
Technology is increasingly being used by bad players, be that to scam artists into parting with cash on the premise of a social media user reaching out to suggest that they would like to purchase the artist’s work, but as an NFT. The scammer will then insist that the sale has to happen on a specific NFT platform which has been created, you guessed it, by the scammer. If you didn’t guess, you might want to be really cautious about engaging with users on social media who reach out to buy your work as an NFT. You might also be a really strong candidate for buying my latest chocolate fireguard.
AI is increasingly being used to make scams more believable than at any time before, and scammers are already seeing a greater number of social media users falling for them. Scams are typically set up with budgets; you usually need people behind the scenes, often lots of them, and people are generally expensive, but some of this back office work to bring a scam to life is now being done through AI, and that takes away much of the cost of running a scam.
Deepfakes are AI generated videos or audio recordings that are made to look or sound like someone or something they’re not, often swapping out faces or creating new recordings that look and/or sound like real people. These are often used to spread false narratives, and with global elections not too far over the horizon, it’s entirely plausible that a rogue government or dictatorship could spread legitimate-sounding information that resonates with a cohort of the population whose beliefs align.
ChatGPT phishing is now a thing, of course it is, and who didn’t see this one coming! For those who need a recap, ChatGPT is essentially a chatbot that runs on an open platform which can be misused to create authentic-sounding conversations through an online chat screen. Scammers are using the system to create fake customer service portals where they then harvest the data the end user willingly supplies, and the conversation can be persuasive enough to get even the most internet savvy user to part with deeply detailed personal information.
Voice cloning is a very different issue to deepfakes, yet no less sinister. Scammers use the voices of real people from phone calls, voicemails, or other audio captures where a user’s voice identifies them. To train the models, an AI needs only a handful of words from the victim to generate an entire vocabulary and then go and have a conversation with, say, your bank. It could also mean that you receive a call from a familiar number where the number has been spoofed, and to make it more believable, the clone might be trained using the vocals of someone working at your bank.
Verification fraud is the next generation of the dark art of forging passports and official papers. Once an industry reliant on wayward artists, it has seen AI replace even these usually highly skilled fraudsters, and I think that’s such a shame really. I’ve been fascinated by the work of some of these artists/fraudsters for decades, not that I condone it, but come on, artistically, you have to appreciate their level of skill.
The documents created by AI are often indistinguishable from the real thing and significantly better than those created by hand; even the traditional artists who created this type of work would struggle to match the quality, bearing in mind that the datasets here would be more textual and probably easier for AI to deal with. I think everyone can see how badly this one ends; we often need to prove identity not just at elections, but to open a bank account, sign up for a loan, the list is endless.
The really contentious issue…
As far as scammers go, if
you’re really careful about what you share and who you share it with, making
sure that you carry out due diligence and always remember that if it sounds too
good to be true it usually is, you have a better than average chance of not
becoming a victim. I say that with the caveat that scammers evolve just as
quickly as the next leap in technology and it will only be a matter of time
before they come up with some new way of getting you to part with either your
cash or your identity, the two most valuable things a scammer needs, even above
oxygen it seems.
But it’s not just your average
scammer who is using AI to deceive. As with every technical evolution, there
are always those who will be figuring out ways to use the technology with a
level of malicious intent, sometimes this intent doesn’t have to be criminally
aligned, it’s sometimes done out of naivety.
And here we are in 2023, and there are artists who maybe as little as six months ago were happy to produce work using a paintbrush together with at least a modicum of skill, who then discovered gateway platforms to AI such as ChatGPT and the dozens of AI based image generators and saw the easy route to an accelerated art career and the potential riches.
Art is, and always has been at
a commercial level, about the numbers. The number of eyes you can get on an
artwork has a direct correlation with the number of sales that you are likely
to make, and the volume and consistency of art is an attractive proposition to
both galleries and search engines. If you can flood a platform with art, your
results are going to get seen. It’s the downside of ranking engines (usually a
relative of AI if not AI itself), and it’s also the downside of many online
marketplaces and print on demand platforms where there is no quality assurance
for the incoming works that are being uploaded and there’s often little to no
curation.
Maybe it’s the sales platforms that governments should focus on through regulation, even before talking about regulating AI. AI is only a tool; when its use isn’t disclosed, what’s being sold is, let’s call it what it is in this context, a fraudulent product, and that has just as many implications for global markets as the overall risk that AI presents.
All sorts of things that are perfectly legal can be used in illegal or illicit ways. It’s how those things are used that is at the crux of almost every argument against AI that I have heard to date; AI itself isn’t yet smart enough to be overly worrisome, but how people are beginning to use it is.
Which brings me on to the
artists using AI to deceive, and yes, those artists are becoming more abundant in
number and bolder in committing what in some territories might even be
regarded as a crime in every other sense.
That might sound dramatic, but
here's some context. An artist spins up the latest image generation app,
writes a text prompt, and the work is then generated by the AI tool. If the artist
then claims this as their own work, which is what we are currently seeing,
that's not only misleading to buyers, it is discourteous to artists who are
still putting in the hours and months to create work that comes from both their
talent and their heart.
You might argue, then, that all
artists should be using AI to level the playing field, but I'm pretty sure
that's not really what the art-buying public want or need. Some will be
intrigued by the trendy nature of AI, but most art buyers will be far more
receptive to a work knowing that it has been created by the artist, and as we
know, art buyers tend to buy into the artist just as much as the art.
Artistic integrity lies at the
core of the art world, with the exception of the murkier elements that are less
than transparent, though these are fewer than people might initially think.
When artists resort to using AI to generate their artwork without disclosing
its involvement, they compromise the artistic integrity that underpins the rest
of the art world. Art is a reflection of an artist's unique perspective,
skills, and emotions. By claiming AI-generated art as their own, these artists
devalue the essence of art and betray the trust of their audience.
None of this is to suggest
that AI shouldn't be used in the creative process. What I'm suggesting is that
you really shouldn't disrespect those who are buying from you with any level of
deceit. If you do, it compromises more than your relationship with the
buyer; it compromises the art world more broadly. AI can be useful, but its
use should be open, which may even encourage new buyers, and it should also be
responsible, not just ethically, but because, as I mentioned earlier, there are
wider issues at play, including the future of art markets and, more than that,
the environmental impact that AI presents.
Then there are the other
issues that are less contentious but still haven't been addressed, and it might
already be a little too late to put the toothpaste back in the tube.
Using AI to generate artwork
without proper attribution raises significant concerns around
intellectual property rights and plagiarism. When artists present AI-generated
pieces as their original work, they may, naively or knowingly, infringe
upon the rights of the AI developers, those who trained the algorithms, or
those who created the work that was used in the training sets.
Intellectual property rights are crucial for fostering innovation, and artists
who cheat using AI not only disrespect those rights but also undermine the
rights of fellow artists, particularly the artists whose images were
used to train the AI in the first place.
Back in Time by Mark Taylor - one of the earlier 2023 retro works, all hand painted, including the LCD details. Watches popular in the 80s and 90s are making a comeback today!
There’s an impact we should care about…
The prevalence of AI-assisted
deception in the art world can have detrimental effects on the art market.
Collectors, galleries, and buyers rely on the authenticity and uniqueness
of artworks when making purchasing decisions. If they discover that an artwork
they acquired was not genuinely created by the artist, it undermines their
trust in the art world, a world which has historically seen some of its
elements fight against transparency, which alone means that the art
world as we know it today is already on shaky ground.
This further undermining of
trust can lead to an even deeper loss of confidence from buyers, and it is this
erosion of trust that could have lasting consequences for the entire art
ecosystem.
To some extent, some of this
behaviour can be understood. As an artist it pays to be entrepreneurial, in
fact it's encouraged, but the role we have within the art world is also one
where we should be educating buyers and encouraging them to support the art
world and us.
So maybe an artist's role is
evolving. Perhaps it might become one where the artist acts as a
preservationist of an industry, a role that guides and educates buyers to
consider how, and what, they purchase so that the longer-term art market
remains viable for both buyers and artists. Maybe there's also some education
needed about the damaging impact AI could have on the environment if it's used
for low-value, short-term gain.
Nobody wins if the industry
becomes watered down, especially buyers who might be relying on the investment
opportunity that buying art presents. There is no reward for anyone if the
product can be freely self-dispensed or, more dramatically, aids the
advancement of global warming.
While AI has the potential to
enhance artistic expression, the misuse of this technology by artists who cheat
undermines the integrity of the art world. It is essential to address this
issue collectively by promoting ethics, transparency, and responsible use of AI
in art. By doing so, we can ensure that art remains a realm of genuine human
creativity, where artists are valued for their originality, skill, and the
emotional depth they bring to their work.
As an artist, it is crucial to
establish ethical guidelines and promote transparency. Artists should clearly
disclose the involvement of AI in the creation process, allowing viewers to
appreciate the work while providing them with an understanding of the role
technology played. Additionally, art organisations and institutions can and
should play a vital role by implementing policies and standards that
promote honesty, accountability, and proper attribution.
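Disclosure doesn't have to be complicated either. As a purely illustrative sketch, and assuming an artist working with PNG files and Python's Pillow library, a note about AI involvement can be embedded directly in the image file's own metadata; the wording of the note and the file names below are hypothetical, not any kind of standard.

# Minimal sketch: embedding an AI-involvement disclosure in a PNG's metadata.
# Assumes Python with the Pillow library installed; file names and the note's
# wording are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

disclosure = (
    "AI involvement: background textures generated with an AI image tool; "
    "composition, drawing, and colour work by the artist."
)

img = Image.open("artwork.png")        # the finished piece
meta = PngInfo()
meta.add_text("Comment", disclosure)   # store the note as a PNG text chunk
img.save("artwork_disclosed.png", pnginfo=meta)

Metadata like this is easily stripped when platforms re-process uploads, so the same note should also appear in the listing text; think of the embedded version as belt and braces rather than a substitute for saying it plainly on the product page.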
Print-on-demand services
should insist on disclosure as a condition of hosting the work, something that,
if not addressed, could eventually see them losing market share, especially when
buyers begin to distrust the work they present. That too is an industry that
needs to take a close look at itself: if a POD company claims to
represent the work of living artists, there's already a question around artists
who reuse out-of-copyright works, often created by long-departed artists.
Until Next Time…
As an artist, I'm equally as
excited as I am worried about the role AI will play in the industry. There
will be broader benefits that help independent artists run their
businesses much more efficiently, and even today AI has a role in testing
design choices and running point on some of the low-value yet highly critical
work that we usually have to do.
Where I worry is that artists
will naively go down the AI rabbit hole without fully realising the impact
it could have on an industry, or the longer-term impact on
the planet, something few of us will initially think about when we create our
next funny cat masterpiece from a short text prompt. In my experience, short-term
thinking is rarely good for business, and the art world is, at best, mostly
a slow-burning candle.
AI isn't going to go away
anytime soon, yet the shine is certainly dissipating. There is already some
chatter in the industry that we have been oversupplied with AI models, and some
people are quickly moving on.
The challenge, though, will be
those who spot the financial savings and begin to automate roles that would
once have been filled by humans. The argument is that people will find new roles
supporting the delivery of AI, but those aren't roles the average worker
will fill; it takes years of training and experience to fill some of the
critical roles the AI industry needs, and when it comes to things like neuromorphic
computing, we're already struggling to find the skills and there are very few
people who can teach it.
So whilst I'm not dismissing
the future of AI, I am certainly dismissing the hype of AI creating a better
world. Governments are already on the hype train with an eye on increasing
their GDP by becoming global super hubs of AI innovation, but even with the
blunt instrument of regulation, we really have to ask: is AI really worth it
just to create your next masterpiece from a sentence of descriptive text? As for AI
replacing workers, we're now seeing the likes of Amazon rethinking their
robotics and AI strategies; robots are efficient, but they're certainly not
cheap.
So until next time, take care,
stay creative, and look after each other.
Mark
About Mark…
Mark is an artist who
specialises in vintage-inspired works featuring technology. He is also known
for his landscape works and the occasional abstract, and has been creating
professionally since the 1980s. He is also a specialist in secure computing
environments and a globally recognised keynote speaker.
You can purchase Mark's work
through Fine Art America or his Pixels site here: https://10-mark-taylor.pixels.com
You can also purchase prints and originals directly, and view Mark's
portfolio website at https://beechhousemedia.com
Join the conversation on Facebook at: https://facebook.com/beechhousemedia and Threads or connect on “X” (You realise it’s still Twitter) @beechhouseart