Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
- CrystalDrug
Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
ABSTRACT
This study examines the authenticity of the artwork in the Gothica playing card Kickstarter project, addressing growing concerns among playing card collectors about the use of text-to-image[1] generative artificial intelligence (GenAI) in playing card designs. With the rise of artificial intelligence (AI) tools, deceitful Kickstarter playing card campaigns are being launched and funded more frequently, hurting the collector community and damaging the integrity of the playing card industry as a whole. This study combines a visual analysis with a comparative analysis using the algorithmic computer vision (CV) AI detection software Illuminarty, comparing the Gothica artwork to AI-generated images, real photos, and artwork from other playing card projects. Our comparative analysis revealed a highly significant (α = 0.05, t = -7.21, p-value = 0.0001) difference in the average AI probability ratios between Gothica artwork and other playing card artwork. Similarly, a highly significant difference (α = 0.05, t = -7.90, p-value = 0.0001) was observed when comparing the average AI probability ratios between Gothica artwork and photos. Combined with the visual analysis results, the study concludes that the Gothica artwork was AI-generated, rather than hand-illustrated as initially claimed by the project creator Nicolai Aarøe. As text-to-image GenAI tools are here to stay and are improving rapidly at imitating handmade imagery, this study highlights the need for collectors to be vigilant, think critically, and adopt AI detection methods to ensure the authenticity of artwork in their collections.
1 | INTRODUCTION
The Kickstarter crowdfunding platform has been the go-to place for playing card enthusiasts and collectors for more than a decade. It’s easy to see why: Kickstarter offers a great opportunity for card lovers to support their favorite artists and acquire exclusive, otherwise hard-to-find playing cards at fair prices. Many Kickstarter playing card campaigns are wildly successful, raising well over the project’s minimum funding goal, or, in the case of Vivid Kingdoms by the artist Peter Robinson (Ten Hundred), over $2,000,000. There is therefore a significant incentive for playing card designers to market their products as unique, authentic, and exclusive in order to fund their projects quickly and gain a foothold in the highly competitive Kickstarter marketplace.
With the rise of generative artificial intelligence (GenAI) text-to-image[1] tools, the number of Kickstarter playing card project creators falsifying the source of the artwork in their decks is rising accordingly. As artificial intelligence (AI) tools improve day by day, the task of distinguishing unique, authentic artwork from artwork generated with AI is getting increasingly difficult as well. Cooke et al. (2024) reveal that humans struggle to distinguish between synthetic and authentic content across various media types, indicating the limitations of relying on human perceptual abilities alone to guard against deceptive synthetic media. Concurrently, Frank et al. (2023) conducted a representative study on human detection of artificially generated media across countries, further elucidating the global challenge posed by synthetic media in deceiving human perception. These studies collectively highlight the critical need for robust countermeasures that transcend human detection capabilities. For playing card enthusiasts and collectors who want to support up-and-coming creators and acquire authentic decks for their collections, the task can seem almost impossible, as they often don’t know where to begin or what to look for.
On October 17, 2023, the Kickstarter playing card project Gothica was launched by the creator The Other Self, verified through an automated process as Nicolai Aarøe. The project’s theme centers on the fantastical mythological creatures and characters of gothic horror literature, recreated in a twisted, heavily stylized aesthetic. The tuck box and card backs were created in a detailed ornamental style, while the faces of the court cards depict various characters in a flatter, more graphic style with a sepia-toned palette. Comparing this artistic style to the rest of Nicolai’s portfolio, we find it in none of his previous creative projects or artworks. Inspecting the Gothica artwork closely, we can notice areas that have a high probability of being signs of AI hallucination[2] rather than conscious artistic, creative, or technical decisions that an artist or a designer would make.
During the funding stage of the campaign, the creator was confronted with these observations by several project backers in the campaign’s comment section. Reacting to these confrontations, Nicolai changed the sentence “All characters are hand illustrated” to “All characters are designed” and uploaded a work-in-progress (WIP) image sequence consisting of “sketch”, “expression”, “detailing” and “toning” stage images to the campaign page as proof that the artwork is authentic and handmade. After being confronted once again with observations that the images in the WIP sequence were highly likely to be falsified, the designer made revisions to the “sketch” stage image and updated the campaign page with this change. This strange behaviour raised suspicion among backers, leading many of them to cancel their pledges entirely.
In this study we examined the artwork in Nicolai Aarøe’s Gothica playing card Kickstarter campaign using two methods: visual analysis, and comparative analysis with the algorithmic computer vision[3] (CV) AI detection software Illuminarty. For the latter method, we compared the artwork in the Gothica campaign to photos, artwork from other playing card projects, and AI-generated images.
2 | STUDY DESIGN AND METHODOLOGY
2.1 | Visual analysis
We started by visually analyzing the artwork in the Gothica playing card Kickstarter campaign. The visuals were extracted from the campaign page by taking screenshots or directly downloading the AVIF files and converting them to JPEG format for compatibility with image reading software. The visuals were then digitally zoomed in, cropped, and straightened to allow for a better view during close-up inspection.
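For readers who want to reproduce this preprocessing step, the sketch below is a minimal illustration of the conversion and cropping, not the exact tooling used for the study. It assumes Pillow with the pillow-avif-plugin package installed for AVIF support; the file names and crop box are placeholders:

from PIL import Image
import pillow_avif  # registers AVIF support in Pillow (assumed installed)

def convert_and_crop(src_path, dst_path, box=None):
    # Open the downloaded image, optionally crop a region of interest,
    # and save it as JPEG for compatibility with image reading software.
    img = Image.open(src_path).convert("RGB")  # JPEG has no alpha channel
    if box is not None:
        img = img.crop(box)  # box = (left, upper, right, lower) in pixels
    img.save(dst_path, "JPEG", quality=95)

# Placeholder file name and crop box for a close-up inspection area.
convert_and_crop("gothica_card.avif", "gothica_card_crop.jpg", box=(60, 80, 560, 760))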
We used our knowledge of visual art and design principles, creative mediums, processes, and common visual signs of text-to-image AI hallucination, to identify and specify the areas that are highly likely to be signs of AI hallucination rather than conscious artistic, creative, or technical decisions made by an artist or a designer.
2.1.2 | Common visual signs of text-to-image GenAI hallucination
Anatomical Inconsistencies
AI-generated images often misrepresent complex details of human and animal anatomy. Anomalies in hand anatomy serve as a prime example of such inconsistencies. Bray, Johnson, and Kleinberg (2023) underscore the difficulty humans face in detecting 'deepfake' images of human faces, pointing to the sophistication of AI technologies in replicating human features, yet often faltering at intricate anatomical details.
Texture and Pattern Discrepancies
AI's capability to mimic textures and patterns frequently lacks coherence. This is evident in the seamless generation of synthetic content that, upon closer inspection, reveals incongruences in texture transitions and pattern alignments.
Lighting and Shadow Inconsistencies
Misaligned shadows and improper lighting are telltale signs of AI-generated imagery. These inconsistencies highlight the challenges AI faces in accurately simulating the physics of light, an aspect that human perception is particularly sensitive to.
Contextual Coherence Absence
AI-generated imagery often misses the logical consistency found in real-world settings, resulting in compositions of subjects, objects, and environments that defy the laws of physics, mathematics, and other quantitative sciences.
Element Repetition
The AI's propensity for pattern replication manifests in repeated elements within an image, signaling its synthetic origin.
Distorted Symbolism
Beyond textual anomalies, AI-generated symbols and logos may lack the consistency and clarity of human-designed counterparts, reflecting the AI's limitations in understanding and reproducing symbolic content.
Abstract Creativity and Its Limitations
The creativity exhibited by AI in abstract art or creative expressions often lacks the emotional depth and thematic coherence characteristic of human artistry, despite achieving visual appeal.
2.2 | Comparative analysis using algorithmic CV AI detection software
We used the algorithmic CV AI detection software Illuminarty to test four data sets that were collected under strict requirements to minimize skewness and bias. Illuminarty combines various CV algorithms to estimate the likelihood that an image was generated by one of the public AI generation models, presenting an AI probability ratio that ranges from 0 to 100 percent. The Illuminarty software is not perfect and does hallucinate; this is expected and considered an ordinary occurrence with any AI-based software. After prior testing of various AI detection tools, we chose Illuminarty because it hallucinated the least and gave the most accurate results.
We compared the artwork in Nicolai Aarøe’s Gothica playing card Kickstarter campaign to real photos, artwork from other playing card projects, and AI-generated images, and analyzed the AI probability ratios of these data sets. We marked the AI probability ratios of individual data points, as well as the average AI probability ratios of whole data sets, in four colors for easier observation and distinction: ratios of 0-10 % (very low) are dark green, 10-50 % (low) are light green, 50-90 % (high) are light red, and 90-100 % (very high) are dark red.
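For clarity, this banding can be expressed as a small helper function; the sketch below is purely illustrative (the function is ours, not part of Illuminarty) and simply maps a ratio to the four labels used in this study:

# Illustrative helper: map an AI probability ratio in percent to the
# four observation bands and display colors described above.
def band(ratio_percent):
    if ratio_percent < 10:
        return ("very low", "dark green")
    if ratio_percent < 50:
        return ("low", "light green")
    if ratio_percent < 90:
        return ("high", "light red")
    return ("very high", "dark red")

print(band(72.59))  # -> ('high', 'light red'), e.g. the data set (D) average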
2.2.1 | Data sets
Data set (A): 30 random photos from the Unsplash public image asset library
Data set (B): 30 random artwork screenshots from 10 playing card projects
Data set (C): 30 random AI-generated images from the Midjourney showcase webpage
Data set (D): 15 random artwork screenshots from the Gothica Kickstarter campaign
2.2.2 | Data set requirements
Screenshots
Data points must be random (to the extent of plausible limitations) and must not contain any photo manipulation or editing other than resizing or cropping. Pixel resolution must be at least 250 kilopixels or 500 x 500 pixels, as a smaller resolution may introduce skewness. Design elements such as suit and rank indicators that are not integrated into the artwork must be cropped out, as obvious handmade design elements may introduce skewness. Image resolution cannot exceed 3 megapixels, a hard limit of the Illuminarty AI detection software.
Photos
Data points must be random (to the extent of plausible limitations) and must not contain any AI-generated content or post-download photo manipulation or editing. Pixel resolution must be at least 250 kilopixels or 500 x 500 pixels, as a smaller resolution may introduce skewness. Image resolution cannot exceed 3 megapixels, a hard limit of the Illuminarty AI detection software.
AI-generated images
Data points must be random (to the extent of plausible limitations), AI-generated, and must not contain any post-download photo manipulation or editing. Pixel resolution must be at least 250 kilopixels or 500 x 500 pixels, as a smaller resolution may introduce skewness. Image resolution cannot exceed 3 megapixels, a hard limit of the Illuminarty AI detection software. A minimal check of these constraints is sketched below.
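The sketch below is an illustrative pre-flight check of the resolution constraints listed above; it is not part of Illuminarty, and the file name is a placeholder:

# Pre-flight check of the stated constraints: at least 250 kilopixels
# (roughly 500 x 500) and at most 3 megapixels, the hard limit of the
# Illuminarty detector.
from PIL import Image

MIN_PIXELS = 250_000    # 250 kilopixels, i.e. 500 x 500
MAX_PIXELS = 3_000_000  # 3 megapixels

def meets_requirements(path):
    width, height = Image.open(path).size
    return MIN_PIXELS <= width * height <= MAX_PIXELS

print(meets_requirements("gothica_card_crop.jpg"))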
2.2.3 | Variables
Independent variable
Data points from data sets (A), (B), (C), and (D)
Dependent variable
AI probability ratio (0-100 %) reported by the algorithmic CV AI detection software Illuminarty
3 | EXPECTATIONS AND HYPOTHESES
3.1 | Expectations
Data set (A) is expected to show a low average AI probability ratio: the photos acquired from the Unsplash public image asset library must follow the platform’s submission guidelines, which do not allow screenshots, in-game captures, composite art, or any content created with GenAI text-to-image models.
Data set (B) is expected to show a low average AI probability ratio: the screenshots acquired from various playing card projects consist of artwork made by reputable artists and designers who are transparent in their creative process and show a consistent artistic style and technical skill in their portfolio both before and after the GenAI text-to-image models became public.
Data set (C) is expected to show a high average AI probability ratio: the images acquired from the Midjourney showcase webpage are created with a public GenAI text-to-image model Midjourney.
Data set (D) is expected to show a high average AI probability ratio: the screenshots acquired from the Gothica playing card Kickstarter campaign contain artwork with visually identified and specified areas that are highly likely to be signs of AI hallucination rather than conscious artistic, creative, or technical decisions made by an artist or a designer. Furthermore, the creator of the campaign displays an inconsistent artistic style and technical skill in their portfolio after the GenAI text-to-image models became public.
3.2 | Hypotheses
For these hypotheses, the study performs a comparative analysis (level of significance α = 0.05) on the imagery samples to observe the difference in average AI probability ratios among them, using a two-sample independent t-test for means; a sketch of this test is given after the hypotheses below. The goal of these hypotheses is to validate the findings of the study. The hypotheses are as follows:
H0: There is no significant difference in the average AI probability ratios between Gothica artwork and other playing card artwork and photos.
H1: There is a significant difference in the average AI probability ratios between Gothica artwork and other playing card artwork and photos.
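The sketch below shows the form of this test in Python with SciPy. The lists are placeholders for per-image AI probability ratios (the actual values are in the data sets linked in the appendix), so the printed statistics will not match those reported in Section 4:

# Sketch of the two-sample independent t-test used for the hypotheses.
from scipy import stats

other_art_ratios = [13.9, 4.2, 22.5, 8.8, 17.1]    # data set (B), placeholder values
gothica_ratios = [72.6, 65.1, 88.0, 91.3, 54.7]    # data set (D), placeholder values

t_stat, p_value = stats.ttest_ind(other_art_ratios, gothica_ratios)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Reject H0 at α = 0.05 if p_value < 0.05.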
4 | ANALYSIS AND FINDINGS
4.1 | Illuminarty performance
The average AI probability ratios of data sets (A), (B), (C), and (D) are as follows: data set (A) - 9.86 %, data set (B) - 13.90 %, data set (C) - 60.76 %, data set (D) - 72.59 %.
Data set (A) shows 1 data point with a high AI probability ratio (50-90 %) out of a sample size of 30. Data set (B) shows 3 data points with a high AI probability ratio (50-90 %) out of a sample size of 30. Data set (C) shows 9 data points with a low AI probability ratio (10-50 %) and 2 data points with a very low AI probability ratio (0-10 %) out of a sample size of 30. The outlier ratios of these data points are considered a result of the plausible and expected AI hallucination characteristic of the Illuminarty software: other known factors were minimized by imposing strict data set requirements, and the origin of the imagery (either AI-generated or handmade) in data sets (A), (B), and (C) is known with a high degree of certainty.
The extent to which the Illuminarty software hallucinates differs depending on the origin of the imagery being tested. Data set (A) shows a small discrepancy between the logical average AI probability ratio of photos (0 %) and the Illuminarty result (9.86 %), a difference of 9.86 %. Data set (C), however, shows a noticeably larger discrepancy between the logical average AI probability ratio of AI-generated images (100 %) and the Illuminarty result (60.76 %), a difference of 39.24 %. This means that Illuminarty is much more likely to confidently misjudge AI-generated imagery, giving it a low AI probability ratio, than to confidently misjudge handmade imagery, giving it a high one. Inferentially, in the context of the binary origin of imagery (either AI-generated or handmade), a high average AI probability ratio (50-90 %) for a data set, even on the lower end of that band, should be interpreted as an extremely strong indication of the data set’s AI-generated origin.
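As a worked illustration of this asymmetry, the short sketch below computes the discrepancy between each data set’s reported average and the logically expected ratio for its known origin (0 % for handmade imagery, 100 % for AI-generated imagery); the per-image ratios themselves are in the linked data sets:

# Discrepancy between each data set's reported average AI probability ratio
# and the logically expected ratio for its known origin.
reported_averages = {
    "A (photos)":             (9.86, 0.0),    # (observed average %, expected %)
    "B (other card artwork)": (13.90, 0.0),
    "C (Midjourney images)":  (60.76, 100.0),
}

for name, (observed, expected) in reported_averages.items():
    print(f"{name}: discrepancy = {abs(observed - expected):.2f} %")
# Prints 9.86 % for (A), 13.90 % for (B) and 39.24 % for (C): under-reporting
# of AI-generated imagery is far larger than over-reporting of handmade imagery.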
4.2 | Findings
Photos and other playing card artwork show very low and low average AI probability ratios respectively, with a difference of 4.04 % between them. There is no notable difference between the results of these data sets; the average AI probability ratios are as expected. AI-generated imagery and Gothica artwork show high average AI probability ratios, with a difference of 11.83 % between them. The small but notable difference between these data sets may be related to the smaller sample size of the Gothica artwork data set; the average AI probability ratios are as expected.
Given the results of our algorithmic CV AI detection test, is there a significant difference in the average AI probability ratios between Gothica artwork and other playing card artwork and photos? Can we know whether the artwork in Gothica is AI-generated or handmade? To answer these questions we performed a two-sample independent t-test for means. We found a highly significant (α = 0.05, t = -7.21, p-value = 0.0001) difference in the average AI probability ratios between Gothica artwork and other playing card artwork. Similarly, a highly significant difference (α = 0.05, t = -7.90, p-value = 0.0001) is observed when comparing the average AI probability ratios between Gothica artwork and photos.
In the first method, we visually analyzed the artwork in the Gothica playing card Kickstarter campaign, identifying and specifying the areas that are highly likely to be signs of AI hallucination rather than conscious artistic, creative, or technical decisions made by an artist or a designer. In the second method, we used the algorithmic CV AI detection software Illuminarty to test four data sets for average AI probability ratios. We then performed a comparative analysis (level of significance α = 0.05) using a two-sample independent t-test for means, which allowed us to reject H0 and accept H1, which states that there is a significant difference in the average AI probability ratios between Gothica artwork and other playing card artwork and photos.
In the context of the binary origin of imagery tested and analyzed in this study, the significant difference in the average AI probability ratios between Gothica artwork and other playing card artwork and photos is deemed to be an extremely strong indication of Gothica artwork’s origin being AI-generated. Combined with the results of the visual analysis method, the study concludes that the artwork in Nicolai Aarøe’s Gothica playing card Kickstarter campaign is AI-generated using one of the public AI generation models.
5 | LIMITATIONS, CONCLUSION AND DISCUSSION
5.1 | Limitations
As outlined in Section 2.1, the visual analysis method requires extensive contextual knowledge of art and design principles to make correct assumptions when analyzing potentially AI-generated imagery. This method may therefore prove inaccurate or ineffective if used without the relevant competence. Sections 2.2 and 4.1 outline a significant limitation of the Illuminarty software: while it was the most accurate algorithmic CV AI detection software we tested, it does hallucinate, and does so to varying degrees depending on the origin of the imagery being tested. Section 4.1 suggests a way to work around this limitation, noting that the varying degrees to which Illuminarty hallucinates can be predicted and accurately interpreted. Section 2.2.2 mentions a limitation on the randomness of sample data points; the randomness of the data points was affected by the internal algorithms of the websites they were collected from. For the Gothica artwork data set, randomness was significantly limited by the amount of suitable imagery displayed on the Kickstarter campaign page. For the same reason, the Gothica artwork data set has a sample size of 15 data points instead of 30.
5.2 | Conclusion and discussion
Playing card enthusiasts and collectors care about the quality of playing cards they’re buying. They expect the playing card artists to be honest and transparent and the artwork they create to be authentic and genuine. Many playing card collectors consider their collections to consist of valuable art pieces, which the aftermarket prices often reflect. Therefore, the integrity of the playing card industry is vital for the collector community.
Kickstarter is often credited for the revival and popularization of the playing card collecting hobby. However, the landscape of playing card campaigns on Kickstarter is shifting. With the rise of text-to-image GenAI tools, deceitful campaigns are being launched and funded more frequently. Creators of such campaigns employ social engineering and visual falsification methods to conceal the synthetic origin of their artwork and portray it as unique and handmade. In the case of the Gothica campaign, Nicolai Aarøe falsified the WIP image sequence of the artwork to minimize further suspicion of text-to-image GenAI involvement in the project. Kickstarter itself rarely enforces its AI policy or takes action against such campaigns, leaving project backers without any safeguards against deceitful creators and false advertising.
Notably, text-to-image GenAI tools are here to stay and are rapidly improving at imitating handmade artwork. For playing card collectors this means that it’s more important than ever before to be vigilant, observant, and critical when browsing Kickstarter campaigns or evaluating new potential additions to their collections. Adopting AI detection methods is also a key component as some playing card designers are willing to go to great lengths to deceive and profit from unsuspecting collectors by disguising quickly made AI-generated imagery as original handmade artwork.
Terminology:
[1] Text-to-image AI uses natural language processing (NLP) to convert a text description into a machine-readable format. A machine learning model, trained on a massive dataset of text and images, learns to identify patterns and uses them to generate new images from that description.
[2] AI hallucination (also called confabulation or delusion) refers to the ability of AI models to generate content that is not based on any real-world data, but rather is a product of the model’s own imagination. In the context of text-to-image models, this means producing images with objects, features, or themes that were not specified in the text prompt and don't logically derive from it.
[3] Computer vision is a field of AI that uses machine learning and neural networks to teach computers and systems to derive meaningful information from digital images, videos and other visual inputs—and to make recommendations or take actions when they see defects or issues.
References:
Google Cloud (2024) Create images from text without writing a single line of code [online] Available at: https://cloud.google.com/use-cases/text ... w-it-works
Banafa, A. (2023). Artificial Intelligence Hallucinations [online] Available at: https://www.bbvaopenmind.com/en/technol ... ucinations
IBM (2024) What is computer vision? [online] Available at: https://www.ibm.com/topics/computer-vision
Mahir, A. (2024) A Hallucination of AI: How to Detect Images Generated by AI [online] Available at: https://www.linkedin.com/pulse/hallucin ... ahir-jwnne
Gray, R. (2024) How to spot a manipulated image [online] Available at: https://www.bbc.com/future/article/2024 ... ated-image
Appendix:
Data sets for comparative analysis:
https://drive.google.com/drive/folders/ ... sp=sharing
Algorithmic CV AI detection software Illuminarty:
https://app.illuminarty.ai/
Methodology presentation:
https://drive.google.com/drive/folders/ ... drive_link
- CrystalDrug
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
Visual analysis result link as there is an attachment limit on the forum: https://drive.google.com/file/d/1Kc5nsb ... sp=sharing
- montenzi
- ✔ VERIFIED Designer
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
I haven't read this post carefully, but the first thing I did was test some of my decks. The results were quite funny!
I used https://app.illuminarty.ai/ to test, right?
First, I tested the cards from my 100% AI project. Well, with 10 cards, the average result was well below 10%! In some cases, it was just 4-5%.
One of the new AI projects - 10%
Old AI tests - 25-35%
Random AI images - 80%
Then I tested the Legendarium tarot. The average result was 40-45%, and sometimes, it was 50-60%.
Hello Tiki - 10-25%
EV - in some cases - 40%
Varius - 10%
Some random 100% hand-drawn decks scored between 30% and 50% for some cards
I don't know. I'm most confused by the fact that the true AI project was rated below 10%. Based on my very quick tests, this automatic system can be easily tricked.
Montenzi.NZ Instagram: @montenzi
- GandalfPC
- Moderator
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
AI-based AI detectors are less than reliable, it would seem.
If a human can’t tell, don’t trust a robot - and if they can - you didn’t need the robot.
- GandalfPC
- Moderator
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
I also wonder if those AIs - generators and detectors alike - were created by stealing art from Nicolai, as his stuff was out there for the stealing at the time, which might explain their heavy claim of ownership over them.
- CrystalDrug
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
Yes, the app link is correct.
montenzi wrote: ↑Tue Sep 03, 2024 8:35 am I haven't read this post carefully, but the first thing I did was test some of my decks. The results were quite funny!
I used https://app.illuminarty.ai/ to test, right?
First, I tested the cards from my 100% AI project. Well, with 10 cards, the average result was well below 10%! In some cases, it was just 4-5%.
One of the new AI project - 10%
Old AI tests- 25-35%
Random AI images - 80%
Then I tested the Legendarium tarot. The average result was 40-45%, and sometimes, it was 50-60%.
Hello Tiki - 10-25%
EV - in some cases - 40%
Varius - 10%
Some random 100% hand-drawn decks scored between 30% and 50% for some cards
I don't know. I'm most confused by the fact that the true AI project was rated below 10%. Based on my very quick tests, this automatic system can be easily tricked.
Could you share the visuals you were testing? Everything you mentioned is outlined in the study. I compared data set averages instead of single data points exactly for this reason - the Illuminarty software does hallucinate just like any other AI-based software.
- CrystalDrug
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
Illuminarty is the most reliable I've found so far, although it is definitely not perfect. I think the combination of human and computer analysis works better than either method alone.
The Gothica artwork has an entirely different style than the rest of Nicolai's portfolio, so I doubt it is relevant in this case.
- Adamthinks
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
Frankly I don't think these AI "detectors" work at all, no matter what sort of human analysis you add to it. I certainly don't think making accusations based on that is useful in the slightest.
- CrystalDrug
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
Could you elaborate?
Adamthinks wrote: ↑Tue Sep 03, 2024 6:51 pm Frankly I don't think these AI "detectors" work at all, no matter what sort of human analysis you add to it.
- Adamthinks
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
I thought that was pretty clear. Those AI detectors don't work. Analyzing the data they provide doesn't add anything to their effectiveness.
CrystalDrug wrote: ↑Wed Sep 04, 2024 6:38 am Could you elaborate?
Adamthinks wrote: ↑Tue Sep 03, 2024 6:51 pm Frankly I don't think these AI "detectors" work at all, no matter what sort of human analysis you add to it.
- NIAVLYSYUG
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
Knowing whether a work was generated by artificial intelligence is a somewhat useless debate; a storm in a teacup.
If I like a graphic work, I buy it...
Artificial intelligence used for the creation of graphic images is a blessing, because it is accessible to everyone and allows us to put into images the scenarios and pictures that inhabit each of us, which we would be unable to do without a classical mastery of the existing and limited tools for the graphic expression of our thoughts.
- CrystalDrug
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
It's not clear to me in the slightest. How did you come to this conclusion?Adamthinks wrote: ↑Wed Sep 04, 2024 8:57 amI thought that was pretty clear. Those AI detectors don't work. Analyzing the data they provide doesn't add anything to their effectiveness.CrystalDrug wrote: ↑Wed Sep 04, 2024 6:38 amCould you elaborate?Adamthinks wrote: ↑Tue Sep 03, 2024 6:51 pm Frankly I don't think these AI "detectors" work at all, no matter what sort of human analysis you add to it.
We can discuss the effectiveness of the Illuminarty software, as I did so extensively in my study but your statement "AI detectors don't work" is simply not true.
- CrystalDrug
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
It is evident that this debate is relevant and important. People who vote with their wallets and are willing to cancel their pledges once they learn the true origin of the artwork in a Kickstarter project are a perfect example. Microstock agencies that invest substantial amounts of money into AI detection software are great examples too.
NIAVLYSYUG wrote: ↑Wed Sep 04, 2024 9:14 am Knowing whether a work was generated by artificial intelligence is a somewhat useless debate; a storm in a teacup.
If I like a graphic work, I buy it...
Not when you market your AI-generated images as authentic and genuine handmade artwork.
NIAVLYSYUG wrote: ↑Wed Sep 04, 2024 9:14 am Artificial intelligence used for the creation of graphic images is a blessing, because it is accessible to everyone and allows us to put into images the scenarios and pictures that inhabit each of us, which we would be unable to do without a classical mastery of the existing and limited tools for the graphic expression of our thoughts.
- Adamthinks
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
It's clearly true, as Montenzi himself just demonstrated above and has been demonstrated a multitude of times elsewhere with text and other media. Not sure why you're putting so much trust in the software.
CrystalDrug wrote: ↑Wed Sep 04, 2024 10:55 am It's not clear to me in the slightest. How did you come to this conclusion?
Adamthinks wrote: ↑Wed Sep 04, 2024 8:57 am I thought that was pretty clear. Those AI detectors don't work. Analyzing the data they provide doesn't add anything to their effectiveness.
CrystalDrug wrote: ↑Wed Sep 04, 2024 6:38 am Could you elaborate?
Adamthinks wrote: ↑Tue Sep 03, 2024 6:51 pm Frankly I don't think these AI "detectors" work at all, no matter what sort of human analysis you add to it.
We can discuss the effectiveness of the Illuminarty software, as I did so extensively in my study but your statement "AI detectors don't work" is simply not true.
- CrystalDrug
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
Please provide links to these comprehensive demonstrations showing that the Illuminarty software doesn't work; I'd like to see them. I also wouldn't call Montenzi's comment a "demonstration". He did a quick test and wrote a quick summary of it; I haven't seen the visuals that were tested, nor read about any data set requirements put in place to minimize skewness and bias.
Adamthinks wrote: ↑Wed Sep 04, 2024 1:05 pm It's clearly true, as Montenzi himself just demonstrated above and has been demonstrated a multitude of times elsewhere with text and other media. Not sure why you're putting so much trust in the software.
- montenzi
- ✔ VERIFIED Designer
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
Sorry, I'm a little too busy to continue this discussion. I have my opinions on everything mentioned here and there, but it doesn't matter.
Yesterday, I ran some other tests and was able to reduce the percentage by 60%. Again, it doesn't matter. I know how to manipulate this system—sorry, no instructions! It doesn't work every time, though.
My interest is still strong, and my AI project has been delayed for more than a year. It was far from what I envisioned at the beginning, but it's now close to answering my question—can AI be used for designing playing cards? Can AI be useful? Now I can say yes, sometimes. But most of the time, it's much easier to draw by hand! So, I call it "my weird academic study." Or, how to force robots to do your job.
And by the way, AI is not just text-to-image. You can't get decent results by relying solely on that.
P.S. My tests weren't 100% accurate, as the percentage is much higher when I post just an image, not a playing card. However, in other cases, it was pure AI images with some modifications and a low percentage. Such tools can be helpful for a basic understanding of the possible (!!!) sources of artwork, but use them with great caution.
UPDATE: When I read the GandalfPC post below, I immediately tested one more thing, and you can try the same: just change your image to black and white, and it will instantly reduce the percentage. In my test case, it was 77% before vs. 15% for the B&W version. If I reduce saturation by 50%, it's 53%; at -75% saturation, it's 25%.
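For anyone who wants to repeat that saturation experiment, here is a minimal sketch (assuming Pillow; the file names and factors are placeholders, not the exact images tested):

# Desaturation tweak described above, assuming Pillow is installed.
from PIL import Image, ImageEnhance

img = Image.open("test_card.jpg")
img.convert("L").save("test_card_bw.jpg")                          # black and white
ImageEnhance.Color(img).enhance(0.50).save("test_card_sat50.jpg")  # -50% saturation
ImageEnhance.Color(img).enhance(0.25).save("test_card_sat25.jpg")  # -75% saturation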
Montenzi.NZ Instagram: @montenzi
- GandalfPC
- Moderator
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
The “Known Issues” for the software from the maker seem to leave plenty of room
- rousselle
- Site Admin
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
I really do enjoy reading a good-faith conversation about taking a scientific approach to evaluating an interesting claim.
I think CrystalDrug's analysis is vigorous and done in good faith. It's certainly reasonable to question the veracity of the tool he used. But... does anybody have any more information on the degree to which the tool has been established to be reliable (or not)? He provides his own samples that he used to test its veracity and explains why he finds it reasonably accurate. Is his sample set large enough? Has anyone else posted a similar test?
I trust montenzi's evaluation as far as it goes, but I don't find it to be an overwhelming refutation of the tool's efficacy. He does raise some valid concerns, however. But... has anybody out there performed anything along the lines of a similar test?
What other decks / sources would be good candidates to use as part of a test?
Like montenzi, I don't currently have enough time available to dive further into this myself, but I am enjoying reading the conversation when I have a spare moment, and thank you all for taking this approach to the question!
This space intentionally left blank.
- CrystalDrug
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
That's understandable. If you do find the time, I'd be happy to continue the discussion. It would be interesting to see your tests and how they were performed, as I haven't encountered such high degrees of hallucination in my initial tests or in the tests for the study. I did find, however, that certain specific art styles in AI-generated imagery don't give accurate results and show low or very low AI probability ratios.
montenzi wrote: ↑Wed Sep 04, 2024 2:58 pm Sorry, I'm a little too busy to continue this discussion. I have my opinions on everything mentioned here and there, but it doesn't matter.
Yesterday, I ran some other tests and was able to reduce the percentage by 60%. Again, it doesn't matter. I know how to manipulate this system—sorry, no instructions! It doesn't work every time, though.
My interest is still strong, and my AI project has been delayed for more than a year. It was far from what I envisioned at the beginning, but it's now close to answering my question—can AI be used for designing playing cards? Can AI be useful? Now I can say yes, sometimes. But most of the time, it's much easier to draw by hand! So, I call it "my weird academic study." Or, how to force robots to do your job.
And by the way, AI is not just text-to-image. You can't get decent results by relying solely on that.
P.S. My tests weren't 100% accurate, as the percentage is much higher when I post just an image, not a playing card. However, in other cases, it was pure AI images with some modifications and a low percentage. Such tools can be helpful for a basic understanding of the possible (!!!) sources of artwork, but use them with great caution.
UPDATE: When I read the GandalfPC post below, I immediately tested one more thing, and you can try the same—just change your image to black and white, and it will instantly reduce the percentage. In my test case, it was 77% before vs. 15% for the B&W version. If I reduce saturation by 50%, it's 53%, for -75% saturation, it's 25%
To address your tests: that's why I had the data set requirements. For playing cards, I cropped the images to show only the artwork; all the handmade design elements were cropped out. You will most likely get a very different result if you test, say, a mockup of a playing card rather than just the artwork in that playing card. The significant limitation of the Illuminarty software was explained in the study, and I also suggested a solution to it, pointing out that the varying degrees to which Illuminarty hallucinates can be predicted and accurately interpreted for our purposes: the software is much more likely to confidently misjudge AI-generated imagery, giving it a low AI probability ratio, than to confidently misjudge handmade imagery, giving it a high one.
- GandalfPC
- Moderator
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
This guy’s testing found that particular detector was the only one tested that identified human taken photo as high percent chance AI generated…
https://www.makeuseof.com/ai-image-dete ... racy-test/
- CrystalDrug
Re: Artificial intelligence cannot draw: Detecting text-to-image GenAI imagery in a Kickstarter playing card project
That's an interesting test, although a sample size of only 1 data point for handmade imagery seems very unreliable.
GandalfPC wrote: ↑Thu Sep 05, 2024 9:59 am This guy's testing found that this particular detector was the only one tested that identified a human-taken photo as having a high percent chance of being AI-generated…
https://www.makeuseof.com/ai-image-dete ... racy-test/
The Hive moderation tool looks interesting and seems to be very good at detecting unedited Midjourney AI-generated imagery but is more inconsistent with artwork screenshots. I ran a quick test using the same data sets I used for this study and the average AI probability ratio results were as follows:
Data set (A): 30 random photos from the Unsplash public image asset library: 2.5 %
Data set (B): 30 random artwork screenshots from 10 playing card projects: 4.73 %
Data set (C): 30 random AI-generated images from the Midjourney showcase webpage: 99.68 %
Data set (D): 15 random artwork screenshots from the Gothica Kickstarter campaign: 54.98 %