Artnome
Blog

Exploring art through data using the Artnome database. 

 

50 Famous Artists Brought to Life With AI

May 21, 2019 Jason Bailey
Pablo Picasso with Jacqueline Roque

I’m working on a longer article about democratizing AI for artists, but in the process of writing that article, I started using Runway ML and Jason Antic’s deep learning project DeOldify to colorize old black-and-white photos of artists - I couldn’t stop. So I decided to share an “eye candy” article as a preview of my longer piece.
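If you want to reproduce the effect yourself, the workflow is short. Below is a minimal Python sketch based on the interface in DeOldify’s demo notebooks - get_image_colorizer and plot_transformed_image are the entry points those notebooks use, but treat the exact names, the device setup, and the sample photo path as assumptions to verify against the current repo (Runway ML wraps the same kind of model behind a GUI):

```python
# A minimal colorization sketch following DeOldify's demo notebooks.
# Assumes the repo is cloned and its pretrained weights are in ./models;
# the function names below follow the notebook interface and may change.
from deoldify import device
from deoldify.device_id import DeviceId

device.set(device=DeviceId.GPU0)  # or DeviceId.CPU without a GPU

from deoldify.visualize import get_image_colorizer

colorizer = get_image_colorizer(artistic=True)  # "artistic" model: bolder colors

# render_factor trades color saturation against artifacts; ~35 is a common default.
# "photos/picasso_bw.jpg" is a placeholder path for your own scan.
colorizer.plot_transformed_image(
    "photos/picasso_bw.jpg",
    render_factor=35,
    display_render_factor=True,
)
```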

When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. But something magical happens when you add color to a photo: these icons turn into regular people you might share a pizza or a beer with.

That distance begins to collapse a bit and they come to life. The Picasso photo above, for example, always made me think of him as this cool guy who hung out in his underwear all the time. But the colorized version makes him seem a bit frail and weak, and maybe even a tinge creepy.

Photos of artist couples, in general, seem to really hammer home their humanity. I think it is because so many photos of artists seem staged or posed. But when we catch them with their spouse or lover, they are their relaxed selves for a candid moment. You can almost imagine inviting them over to play cards.

Lee Krasner and Jackson Pollock

Joan Miró and Pilar Juncosa

Alfred Stieglitz and Georgia O’Keeffe

Willem and Elaine de Kooning

Other photos feel even more magical and distant after the deep learning auto colorization. The image below with Frida Kahlo crouching next to a deer, for example (my favorite), feels somewhat otherworldly. Likewise with the famous photo of Salvador Dali flying through the air and Yayoi Kusama photographed with her spotted horse.

Frida Kahlo

Salvador Dalí

Yayoi Kusama

I also really enjoy watching the algorithm try to figure out how to colorize the artists’ actual paintings. It turns James Ensor into a bit of a pale zombie but has him painting in a neon palette fit for a blacklight. Kandinsky’s palette shifts almost entirely to purples and blues and takes on an almost tribal feel.

James Ensor

Wassily Kandinsky

Jackson Pollock

Georges Braque

Raoul Hausmann and Hannah Höch

There are known issues with AI and machine learning models being trained disproportionately on images of white people, and thus struggling to properly represent nonwhite skin tones. You can see this a bit in the colorized image of Picasso, in which he appears more pale than the olive/bronze tone we are used to seeing in known color photos. Jason Antic, the developer behind DeOldify, takes this very seriously, and just yesterday tweeted the following:

On the question of skin tone bias in DeOldify: We take this issue seriously, and have been digging into it. We're going to be overhauling the dataset to make sure it's driving more accurate decisions in the next few weeks. Overall, there seems to be a red bias in everything.

"Everything" includes ambiguity in general object detection. It can even be seen in Caucasian colorings (see example below- left is original, right is DeOldify). So overall it seems that DeOldify needs a better dataset than just ImageNet. We're prioritizing it -now-.

Example of bias toward red, from Jason Antic’s Twitter stream

That being said, it's a challenging problem and it'll be hard to verify. Everybody is different, so making this work for all cases will probably be something we'll need years to perfect. So please keep an eye out for updates and keep trying it out for yourself!

That said, it does a reasonable job of not “whitewashing” non-Caucasian artists, even if it has a ways to go before perfecting skin tones of any color.

Jacob Lawrence

Alma Thomas

Wifredo Lam

Rufino Tamayo

Aaron Douglas

Kazuo Shiraga

Isamu Noguchi

The algorithm also seemed to struggle with the oldest images. This makes sense to me: there is less fidelity and, therefore, less input to guide the algorithm. With less guidance, the algorithm sometimes has to get creative, as with the Monet photo below.

Claude Monet

Paul Cézanne

Auguste Rodin

Paul Gauguin

Many of my favorite DeOldified photos are the ones that show artists we are familiar with but have rarely or never seen photographed in color.

Egon Schiele

Gustav Klimt

Edvard Munch

Leonora Carrington

Hilma af Klint

Piet Mondrian

Henri Matisse

I also really enjoy the photos of the artists in their studios and at work. This photo of a younger-looking Francis Bacon is among the most convincingly colorized in the batch I converted.

Francis Bacon

Alice Neel

Agnes Martin

Helen Frankenthaler

Robert Motherwell

Bridget Riley

Louise Bourgeois

Barbara Hepworth

Hans Hofmann

Mark Rothko

Jean Dubuffet

Giorgio de Chirico

Frank Auerbach

Alberto Giacometti

Eva Hesse

Louise Nevelson

I hope you enjoyed seeing these artists in color as much as I did. In the next article, the one I set out to write when I got distracted, I will go into more detail on Runway ML and how it is making these remarkable new AI tools accessible to everyday artists and designers.


Solving Art's Data Problem - Part One, Museums

April 29, 2019 Jason Bailey
Joseph Siffred Duplessis (French, Carpentras 1725–1802 Versailles). Benjamin Franklin (1706–1790). 1778. Oil on canvas. Oval, 28 1/2 x 23 in. (72.4 x 58.4 cm). The Metropolitan Museum of Art. The Friedsam Collection, Bequest of Michael Friedsam. 32.100.132. https://www.metmuseum.org/art/collection/search/436236

I recently came back from a conference in Bahrain that focused on, among other things, artificial intelligence and machine learning in art. I am as excited as anybody about the potential to apply these new tools to art and art history, but we simply do not have much data about art in a format that is clean, accessible, and easy to analyze. And without quality data, these new machine learning tools add little value to the discourse around and use of art.

Lack of data has caused other problems as well. People debate the exact number (which is likely unknowable), but many suggest that 15-20% of the art in museums and on the market is either forged or misattributed. A lack of quality data on art in an easily accessible format contributes to this problem.

So how do we solve the problems around quantity, quality, and accessibility of data in art? This question has been my focus for the last five years as I have built out the Artnome database of artists’ complete works along with new analytics that can only be derived from such a database. However, tackling a problem of this scale requires collaboration and effort from many different experts and groups attacking the problem from many different angles, including museums, collectors, estates, galleries, and auction houses.

In this first part of my series on art and data, I speak with Neal Stimler, Senior Advisor at the Balboa Park Online Collaborative. Neal served over a decade at The Metropolitan Museum of Art in New York City in successive positions. He worked on rights and permissions, designed digitization workflows for The Met’s collection at scale, oversaw partnerships with the Google Cultural Institute and Wikimedia communities, among other organizations, and was the project manager for The Metropolitan Museum of Art’s Open Access program that launched in 2017. Neal’s expertise in cultural heritage has deep roots in data and digital asset management, but it also incorporates areas of practice that include copyright policy, education, public engagement, operations management, and cross-reality technologies.  

JB: Thanks for joining us, Neal. Let’s start with the basics. What is Open Access?

NS: The term open access is derived from open academia, where the standard is Creative Commons Attribution license or better. Open-Access (OA) content - whether we are talking about a piece of art, a writing or other work - is free of most copyright and licensing restrictions and is often available to the user without a fee. For a work to be OA, the copyright holder grants everyone the ability to copy, use, and build upon the work without restriction. I recommend the essential book Open Access by Peter Suber and Creative Commons’ overview on the topic. The video that most inspired my work in Open Access was “A Shared Culture.” A key aspect of engaging Open Access, too, is awareness and dedication to supporting the public domain.

The adoption of open access in museums and the GLAM sector is relatively more recent than in the academy. In the cultural heritage sector, professionals and supporters center around the GLAM-Wiki and OpenGLAM communities of practice. These communities advocate for open-access policies for data, digital assets, and publications resources from galleries, libraries, archives, and museums (GLAMs). Practitioners within and external to cultural institutions build tools to make these world heritage resources available to the public for uses ranging from commercial to creative to scholarly.

JB: What is involved with a museum making its collection available online? How long does it take for a museum to transition from being closed to open access [OA]?

NS: Some resources to consult in this process include The Rights and Permissions Handbook (American Alliance of Museum OSCI 1st Edition; Rowman and Littlefield, 2nd Edition), “Copyright Checkpoint,” and the “Copyright Cortex.” Some museums may also consider RightsStatements.org and International Image Interoperability Framework (IIIF) to address back-end rights management and image services. The “Collections As Data” project and “Museum APIs” wiki may also be useful resources.  

After performing a thorough rights assessment on the assets in question, and after consulting with licensed legal counsel in their jurisdiction, museums then need to build tools to provide mass self-serve access to data and digital asset sets. These tools typically come in the form of a museum's collection online website, a public application programming interface (API), and a GitHub repository of data in the .CSV and .JSON formats. Data should be offered with the same permissions and legal frameworks as associated image assets.
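[To make the self-serve tooling concrete, here is a small Python sketch against one public example of such an API, The Met’s open access collection API. The endpoint and field names reflect the API as documented around the time of writing, and the object ID is illustrative - verify both before relying on them. - JB]

```python
import requests

# The Met's public collection API: one example of the self-serve tools
# described above. Endpoint and field names as documented circa 2019.
BASE = "https://collectionapi.metmuseum.org/public/collection/v1"

# Fetch a single object record; the object ID here is illustrative.
obj = requests.get(f"{BASE}/objects/436535").json()

# Open access records flag their rights status and link a usable image.
if obj.get("isPublicDomain"):
    print(obj["title"], "-", obj["artistDisplayName"])
    print("Image:", obj["primaryImage"])
```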

Importantly, for a data set to be useful to the broadest spectrum of the public, it must include not only identifying or “tombstone” data for objects, but also rich contextual data like object descriptions, provenance, bibliography, artist biographies, or other data that help users to interpret and understand objects.  The API serves application developers and partners, while .CSV and .JSON formatted data mainly supports researchers and scholars. Open-access content should be hosted in partnership with crucial aggregation platforms such as Wikidata, Wikimedia Commons, and Internet Archive. Other partners and aggregators might be impactful given the nature of the type of collections. Museums, too, should be mindful to evaluate and make decisions with respect to cultural and ethical considerations of open access in collaboration with communities and scholars.  

The process from being “closed” to going open access depends on an institution’s preparedness. An advanced level of digital transformation is required for an institution to manifest policies and deliver the necessary tools to provide quality open access services to the public. An absolute commitment to open access and sincere leadership at the executive level and in upper management are required for open access initiatives to succeed. Open access should represent a broader philosophical shift across all aspects of the museum’s operations and programming. An internal working group or project team from relevant areas across the organization should be assembled, led by a project manager who owns the project vision and has ultimate decision-making authority. Partnerships with allied organizations engaged with an institution’s users, and working directly with Creative Commons, are strongly recommended to implement the best practice approach.

Attributed to Duncan Phyfe (American, born Scottish, 1770–1854). Armchair. 1810–15. Made in New York, New York. United States. Mahogany and brass. 32 3/4 x 20 7/8 x 17 3/4 in. (83.2 x 53 x 45.1 cm). The Metropolitan Museum of Art. Gift of C. Ruxton Love Jr. 60.4.3. https://www.metmuseum.org/art/collection/search/268

JB: What are the benefits of institutions implementing open access policies?

NS: The benefits of museums adopting open access policies are certain, clear, and proven. First, museum users expect open access by default. Museums need to redefine their obligation to “access” in the 21st century. The collection is not theirs; they hold it in the public’s trust, and that comes with responsibilities to serve a broad spectrum of users.

Museums employing a clear Creative Commons standards-based policy and well-developed technology platforms in their open access initiatives may receive a significant positive public response. Museums may also see an increase in website traffic on their sites at the time of launch. This web traffic can extend over the long tail through placing data and digital assets onto partner sites’ platforms, where engaged communities of practice make use of the content. Two crucial partners are Wikimedia platforms and Internet Archive, which authentically serve engagement goals with user communities, as well as provide analytics.

Second, digital humanities and other researchers, as well as data scientists, can perform new models of research and publication with unambiguously marked open content. Open access content enables the building of new intersectional and multimodal knowledge systems that are not possible with the restrictions of “closed” content.

Third, open access museums find that their collections, having been opened, become the go-to sources for data and images by journalists and scholars seeking quickly accessible, high-quality, and confidently rights-cleared content for their publications. Simply put, open access data and images are used, and closed data and images are increasingly not used due to the omnipresent burdens of time, money, and process needed to solve rights issues.  

Fourth, museums that make the transition to open access improve operational efficiency, save money on operations (the image request process), and reduce friction for the benefit of users. Image revenue and licensing as a business for public domain artworks continue to decline. Staff who previously wasted resources manually processing burdensome rights-clearing requests for works in the public domain may now focus on rights cataloging for newly acquired and backlogged objects; can build more accurate and complete collection records; and can increase the amount of comprehensive data that provide greater possibilities for the use and interpretation of collections.

Carleton E. Watkins (American, 1829-1916). Bridal Veil, Yosemite. c. 1865-1866. Albumen print from wet collodion negative. Image: 40.1 x 52.4 cm (15 13/16 x 20 5/8 in.); Matted: 61 x 76.2 cm (24 x 30 in.). The Cleveland Museum of Art. Andrew R. and Martha Holden Jennings Fund. 1992.12. http://www.clevelandart.org/art/1992.12

JB: What can users do today with open access collection content?

NS: We do not yet know the full extent of what is possible. Let’s examine several examples and potential applications for how users can engage with open access collections content as a guide.

Art

The Next Rembrandt, a collaboration between ING and Microsoft with advisement from the Technical University of Delft, Mauritshuis, and Museum het Rembrandthuis, produced a “new” Rembrandt painting, using data to algorithmically generate a composite portrait based on defining characteristics of Rembrandt’s style. The project drew upon many data sources, including data and images of Rembrandt portraits, which are largely in the public domain. Without further clarification, this project cannot be considered an open access example per se, in that the research data, code, and final image do not appear to be available for reuse by others under an open access license. This kind of research and production does, though, provide a useful example of how public domain collections can foster creative potential for making new art and reinterpreting art history through data. Future examples could be made with open access artworks and data. Watch the video.

Artificial Intelligence and Machine Learning

Andrew Lih and members of the Wikimedia community used a specific iteration of the Wikidata “Distributed Game,” called “Depicts,” to pair AI and machine learning with human verification in tagging images from The Metropolitan Museum of Art’s open access collections. The new data created through this effort lives on the decentralized Wikidata platform, where all can benefit from it, rather than being confined solely to The Met’s collection online. This is a breakthrough for museums and scholars worldwide. Lih stated the project was “...a powerful demonstration of how to combine AI-generated recommendations and human verification. Now, with more than 3,500 judgments recorded to date, the Wikidata game continues to suggest labels for artworks from The Met and other museums that have made their metadata available.” In conclusion, Lih wrote, “One benefit of interlinking metadata across institutions is that scholars and the public gain new ways to browse and interact with humanity's artistic and cultural objects.”
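[Because the resulting “depicts” statements live on Wikidata, anyone can query them. Here is a sketch of such a query in Python: P180 (“depicts”), P195 (“collection”), and Q160236 (The Metropolitan Museum of Art) are standard Wikidata identifiers, while the query shape itself is illustrative. - JB]

```python
import requests

# Query Wikidata's public SPARQL endpoint for Met artworks carrying
# "depicts" tags. P195 = collection, Q160236 = The Met, P180 = depicts.
query = """
SELECT ?work ?workLabel ?depictsLabel WHERE {
  ?work wdt:P195 wd:Q160236 ;
        wdt:P180 ?depicts .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "open-access-demo/0.1"},  # endpoint asks for a UA
)
for row in resp.json()["results"]["bindings"]:
    print(row["workLabel"]["value"], "depicts", row["depictsLabel"]["value"])
```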

Bots

Creative developer Andrei Taraschuk is an art fan who makes Twitter art bots for individual artists to share their work on social media. Taraschuk also created art bots for each curatorial department that The Cleveland Museum of Art made available with its #CMAOpenAccess program. These artworks are now being shared more widely than any one institution could do within the confines of their own social media program. Watch Andrei’s Ignite Boulder 37 talk, “Enriching Social Media Through the Power of Art, Bots and AI.”

Commercial Art Platforms

Artsy is a unique platform in the art data environment for collecting and discovering art because of its museum partnerships, research in the Art Genome Project, and its incorporation of open access images and data from third-party providers. Artsy presents art information from the marketplace along with related works in museum collections. Artsy's collections online website is a rare opportunity to examine and find artworks in museum collections with similar works currently for sale. Artsy's approach is valuable for the history of collecting and studying connoisseurship at the intersection of the art market and art history on a nuanced digital platform. Artsy, for example, incorporates open access artworks from The Cleveland Museum of Art. Artsy also has a focus on open source software development, and its public API provides educational and non-commercial access to images and information for historical and public domain artworks.

Data Visualization

Open access museum collection data can be interpreted and perhaps better understood through computational methods such as data visualization. A key leader in museum data is Jeff Steward at Harvard University Art Museums. Jeff’s 2015 "obJECT" lecture, which is part of the Sightlines series of The Digital Futures Consortium, gives an excellent overview of how museum collection data can be creatively visualized. Watch a video of the “collection blooms” visualization. Read more on Harvard University Art Museum’s Index and explore the API and GitHub pages. In addition, The Tate, from 2013 to 2015, developed a digital strategy and open access digital collections data initiatives. Key figures included John Stack, Elena Villaespesa Cantalapiedra, and Richard Barrett-Small. Data researcher Florian Kräutli created visualizations and provided analysis on the data for Tate and The Museum of Modern Art. The Cleveland Museum of Art partnered with Pandata to do a visualization of their collection with the launch of Cleveland’s open access initiative.
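[As a trivial illustration of the kind of analysis these open datasets enable, here is a sketch that charts a collection’s acquisitions over time. The file name and column name are placeholders - every museum’s open access export has its own schema. - JB]

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical open access export; column names vary by institution.
df = pd.read_csv("collection_open_access.csv")

# Count objects per acquisition year and plot the collection's growth.
per_year = df["accession_year"].dropna().astype(int).value_counts().sort_index()
per_year.plot(kind="bar", figsize=(12, 4), title="Objects acquired per year")
plt.tight_layout()
plt.savefig("acquisitions_per_year.png")
```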

Design

Open access museum images have been used in design collaborations with the Rijksmuseum and Etsy, as well as the National Gallery of Denmark and Shapeways. The Rijksstudio Awards by Rijksmuseum featured a top 30 finalist submission by Dr. Andrea Wallace called the “Pixel and Metadata” dress, where the museum collection data itself became a design object.  

Online Learning

Smarthistory is one of the most accessible online learning resources for public and digital art history. It is an open educational resource, or OER. Its mission is to “open museums and cultural sites up to the world” through blog posts, essays, images, timelines, and videos on art history. Smarthistory has a deep corpus of content that serves learners at high school and undergraduate university levels, as well as lifelong learners. Its content is clearly communicated, well researched, and critically engaged, making it a reliable and progressive learning platform. Smarthistory uses Creative Commons legal tools for the licensing of its publication overall, and utilizes Creative Commons-designated images to populate its essays and videos. Imagine if museums treated their websites like Smarthistory, using Creative Commons legal tools for content so that others could more freely build, create, and share art online. New types of art publications could be created algorithmically and by humans in the future with a more open approach modeled on and expanded from Smarthistory. Smarthistory was founded by Dr. Beth Harris and Dr. Steven Zucker, who are the executive directors. Dr. Naraelle Hohensee is the managing editor.

Katsushika Hokusai (Japanese, 1760-1849). Under the Wave off Kanagawa (Kanagawa oki nami ura), also known as The Great Wave, from the series "Thirty-Six Views of Mount Fuji (Fugaku sanjurokkei)." 1830-33. Color woodblock print; oban. 25.4 × 37.6 cm (10 × 14 3/4 in.). The Art Institute of Chicago. Clarence Buckingham Collection. 1925.3245. https://www.artic.edu/artworks/24645/

JB: What copyright and data frameworks are the museums you are working with using? What are those frameworks? It seems like institutions have areas of consensus, but also differences in their approaches to open access.

NS: Working in open access means building resources and working “in the commons.” An institution does not have to undertake the open access process in isolation and risk creating a bespoke policy that does not follow the established practices of leading open access institutions and allied organizations like Creative Commons. Creative Commons provides the most widely used, interoperable, and globally standardized legal framework for open access. The Creative Commons Zero Public Domain Dedication is the most open and permissive tool, as well as being the most commonly used by leading cultural institutions that seek to assertively remove as many barriers as possible to foster the use, reuse, and remix of their collections. Note that it would not be considered open access if a museum applied a Creative Commons Attribution license to digitized objects in the public domain.

Some GLAM institutions have implemented conditions that require users to “share-alike,” meaning that creators who use “share-alike” content must offer their new creation or derivative work under the same conditions as the source material. While the “share-alike” concept may appear more progressive, it may potentially hinder the freedom of expression, individual liberty, and interpretation of others with its dependent contingencies. Share-alike was intended to help build and expand the commons, but it may more often act as a deterrent, causing users to look elsewhere for content that can be used without undue burden on their creative production and under more harmonious terms like Creative Commons Zero. Furthermore, museums may not have a right to license under share-alike, thereby creating confusion for both institutions and users. The application of other licenses like share-alike or non-commercial should only be considered for works created by the institution, where it holds the copyright, as opposed to digitizations of underlying works that are in the public domain.

Some museums, early in the development of open access, created specific policies for open access in their terms and conditions or by using the statement “public domain.” It is important that cultural institutions understand that the concepts and legal framework for “public domain” are determined by a range of factors and are often dependent on country-specific or national definitions. Some institutions may use the Creative Commons Public Domain Mark for collection images and data, but this tool does come with considerations around works that may have a “hybrid” public domain status, meaning they have a status that is “public domain in some jurisdictions but may also be known to be restricted by copyright in others.”

Museums especially should opt for Creative Commons Zero when applicable to digitized collections or museum-produced content because it, as stated on the Creative Commons website, “provides the best and most complete alternative for contributing a work to the public domain given the many complex and diverse copyright and database systems around the world,” and “clarifies the status of your work unambiguously worldwide and facilitates reuse.” The commons of the Internet is a realm of production beyond any one nation or group. Museums doing open access should desire to see their collections engaged and used assertively on a global scale.

Margareta Haverman (Dutch, Breda 1693–1722 or later). A Vase of Flowers. 1716. Oil on wood. 31 1/4 x 23 3/4 in. (79.4 x 60.3 cm). The Metropolitan Museum of Art. Purchase. 71.6. https://www.metmuseum.org/art/collection/search/436634

JB: I always get super excited every time another museum makes its collection open access, but to be honest, it is not always clear how to engage with this content. I feel like in addition to making data and assets available, we are missing the tools to make it easier for the average person to consume, filter, and mine all of this data for exciting insights and to tell their own stories or do their own research using the content. Do you agree? Are you aware of efforts to make museums’ collections easier to analyze and consume?

NS: Baseline content elements and tools for museums to deliver open access are identified in this text. They are more mature than people may realize. The GLAM sector does need to improve tools for working collaboratively at scale with decentralized and distributed data and digital assets at the peer level. Between museums and partners, what is needed are highly automated and sustainable pipelines for digital assets to connect and to be distributed online to partners and subsequently end-users. In terms of tools for end-users, there are exemplary artists, developers, and scholars working with museum content. Those creators, whether independent or institutionally affiliated, have the tools they need to make in their contexts. Active partnership with museums can maximize creative output and benefit makers. Museums also need to do the due diligence of documenting and sharing open access projects made from their collections that they admire to inspire others and build a greater corpus of relevant examples.

The first plateau for any museum to reach is to make data, images, and publications open access. After that best practice step, museums must understand and commit to the future development of open access initiatives for the long term as being equal to exhibition-making, collecting objects, conservation, and scholarly publishing. Open access is a pillar of both museum content development and community engagement. Open access is not a “set it and forget it” scenario. Open access requires not only ongoing operational and technical maintenance, but sincere incorporation into the programmatic functions of a museum such as education, public programs, and scholarly publishing. The answer to the public engagement question for the long term with open access museum collections is not one-time contests, festivals, or hack-a-thons. These short-term tactics will not achieve a museum’s goal of deep and authentic engagement with users because they do not scale and are not part of annually budgeted programmatic efforts.

The critical opportunity for museums is to co-produce knowledge systems and experiences of collections built in collaboration with users. I’ve written about this in detail recently in a paper for the Museums and the Web 2019 conference in Boston, addressing the historical development of collections online and “Wikification.” Users need to see their contributions manifested, reflected, and impacting the ways that museums carry out their missions at the data level, whether on a museum’s collection platform, a third-party site such as Wikimedia, or a user’s independent creative project. Museums need to commit to working together on tool development and resources that work well beyond small consortia and self-selecting peer groups. The impact and scale of museum collections come to fruition when they are part of an ecosystem of content on popular and commercial applications that are familiar to, and widely adopted by, users with a diverse range of interests and skills.

Making museum data easier to analyze, consume, and create with for users is a necessary part of the hard work of digital transformation that responsible museums must do. Museums can remain relevant by providing essential services for cultural production and consumption in the digital world. Museums must prioritize an operational philosophy and practice that efficaciously meets the transactional customer expectations of not only millennials, the rising dominant global generation, but also the successive-born digital generations who will have even higher levels of synchronicity between digital and physical lives. Artificial intelligence and machine learning, along with human interaction, have the potential with open access to help museums make more meaningful user connections through accessible, multilingual, and translated content, as well. Commercial businesses have already prioritized customer needs with new technology developments. Museums also can optimize the use of these tools to offer potential benefits for human connectivity and greater mutual understanding, especially when engaged with museum content with open access.

JB: Have you spoken to museums that are afraid to make their collections open access? If so, what drives the fear and how do you overcome this?

NS: Yes, I am in frequent conversations with museum clients about how to make the open access transition for their institutions. Most fear in this regard stems from the same conditions that undermine positive improvement in other aspects of business and life: uninformed anecdotes, too much self-focus, and a misguided sense of tradition that says “this is the way we have always done it” or “we are limited by an edge case.” While there may be real on-the-ground obstacles to taking on open access for an institution, it is important to face change and have the will to move forward. In addition to pointing to the major open access success stories of leading institutions, I encourage executive leaders and staff throughout GLAM communities to remember their missions and responsibilities to the public they serve. Open access is “mission critical” for museums.

William Henry Fox Talbot (British, Dorset 1800–1877 Lacock). A Scene in a Library. Before March 22, 1844. Salted paper print from paper negative. Image: 13.3 x 18 cm (5 1/4 x 7 1/16 in.). The Metropolitan Museum of Art. Gilman Collection, Gift of The Howard Gilman Foundation. 2005.100.172. https://www.metmuseum.org/art/collection/search/283066

JB: In addition to museum collection data, there are catalogue raisonné data and gallery and auction records. The New York Public Library defines catalogue raisonné as “a comprehensive, annotated listing of all the known works of an artist either in a particular medium or all media.”  In a perfect world, we would have an artist-level view of all of the works an artist has created, where they currently reside, and where they have been in the past. How could catalogue raisonné data be useful working across museums and with estates, galleries, libraries, or auction houses in a unified and decentralized manner?

NS: Catalogue raisonné data is particularly interesting because, in aggregate with open access, it has the potential to transform how the history of collecting and provenance are studied across public and private collections over time. Catalogue raisonné numbers are facts that are not copyrightable. The difficulty in many cases is that this data is mostly still available only in print. In the case of 20th- or 21st-century artists, it remains the purview of artists’ estates or representatives whose primary interest is the accounting and value promotion of a particular artist’s work rather than building shared knowledge through comparative research with other artists or collections. Catalogue raisonné projects published in print, or in a digital format with restrictive or closed access, are prime examples of costly, inefficient, and outdated knowledge production processes. Moreover, they are “data silos.”

If catalogue raisonné numbers and data were published as open access, they would provide richer cataloging records for museum collections around the world through shared bibliographic data and enable museums to focus energy on creating new catalogue records for new or unprocessed collections. Catalogues raisonnés are collaborative publications in which academics and curators work together to produce knowledge, although typically in an enclosed and invite-only process. The perspectives contributed by external and independent scholars in making a catalogue raisonné entry are often not incorporated at the same level of authority as internal curatorial knowledge within museum collections online, and may only be incorporated as citations or when absorbed into summary knowledge as presented to the public in an object description or label. Wikidata can act as a unified and decentralized platform where catalogue raisonné numbers and data could have a broader impact. From Wikidata, catalogue raisonné data could be used by museums as well as auction houses, collectors, and scholars. Wikimedia contributor Jane Darnell mentioned to me in a tweet that she digitizes catalogue raisonné data from old publications for use on Wikidata as part of the WikiProject Sum of All Paintings. Jane shared examples of catalogue raisonné and Wikidata work on the paintings of Hofstede de Groot and Bartholomeus van der Helst.
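[As a sketch of what “catalogue raisonné numbers on Wikidata” can look like in practice: Wikidata records catalog numbers with its “catalog code” property (P528), and Q3305213 is the item for “painting.” The query shape below is illustrative, not a complete catalogue raisonné workflow. - JB]

```python
import requests

# Sketch: fetch paintings that carry a catalog code on Wikidata.
# P31 = instance of, Q3305213 = painting, P528 = catalog code (the property
# used for catalogue raisonne numbers, among other catalogs).
query = """
SELECT ?painting ?paintingLabel ?code WHERE {
  ?painting wdt:P31 wd:Q3305213 ;
            wdt:P528 ?code .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""
rows = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "cr-demo/0.1"},
).json()["results"]["bindings"]

for r in rows:
    print(r["paintingLabel"]["value"], "->", r["code"]["value"])
```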

Some examples of digital catalogues raisonnés include the SFMOMA Rauschenberg Research Project, the Pieter and Jan Brueghel sites, and Artifex Press. A model that points to a more progressive future is the Paul Mellon Centre for Studies in British Art’s catalogue raisonné of the artist Francis Towne. On its copyright page, the Francis Towne catalogue provides nuanced details about the rights status of the overall publication and the elements within it. The Towne catalogue [as a whole publication] is offered under a Creative Commons Attribution-NonCommercial 3.0 Unported license, along with acknowledgment of sourcing open access images under Creative Commons Zero. The online publication provides a search filter to find open access images directly within the catalogue itself that may be downloaded at high resolution and reused according to the terms of the source image. For other museum examples, although not catalogues raisonnés, consult Ancient Terracottas: From South Italy and Sicily in the J. Paul Getty Museum, and The Digital Walters. These digital publications offer downloadable and rich content packages and use Creative Commons legal tools.

Challenges concerning the current state of catalogues raisonnés speak to ongoing difficulties in the education, training, skills development, and present condition of digital art history and scholarly practice. Art historians are still working mainly in outmoded practices of knowledge production that can be made more collaborative, transparent, and synchronized by compiling catalogues raisonnés not only digital-first, but open access - as publications that extend beyond images and text into data and code. The code can also be published with companion open source legal tools alongside Creative Commons licensed content, and may work in conjunction with a Creative Commons Zero Public Domain Dedication.

Rembrandt van Rijn (Dutch, 1606-1669). Self-Portrait Leaning on a Stone Sill. 1639. Etching and drypoint on cream laid paper. Sheet: 20.6 x 16.3 cm (8 1/8 x 6 7/16 in.); Platemark: 20.4 x 16.1 cm (8 1/16 x 6 5/16 in.). The Cleveland Museum of Art. Bequest of Mrs. Severance A. Millikin. 1989.244. http://www.clevelandart.org/art/1989.244

JB: How do museums manage rights and permissions issues? Do museums own the copyright for the images of their collections? Can people feel free to modify or share the images they find through the open access initiatives for these museums?

NS: Rights and permissions management for art museums can involve many roles internal and external to an organization. It may include staff within an organization such as rights and permissions managers, collections managers, legal counsel, registrars, curators, conservators, and in important cases, even museum directors. External to an organization are artists’ estates and their representatives, who may be the exclusive agents representing an artist’s rights for copyright use requests. For works in copyright, art museum staff work in close coordination with artists’ estates and their representatives to review use and permission requests on a case-by-case basis. Loans and other restrictions can apply to works as well, often defined on a contractual basis between parties. It is crucial to distinguish the fees charged by art museums for digitization from fees charged for rights and permissions requests. Assessing fees for digitization may be appropriate for the costs of museum staff labor (e.g., handling objects, photography, post-production), time, and resources.

Rights and permissions is a highly manual, labor-intensive, time-consuming, and often costly process for the museum and the end user. Fees are assigned to projects based on a variety of factors. The rights and permissions process within art museums acts more like “gatekeeping,” denying the public access to the use of artworks, either at the behest of the specific institution or of the rights holder. A significant limitation of the rights and permissions process across the GLAM sector is that it is primarily focused on processing image requests, leaving largely no standard mechanism or process for other content packages such as code, data, text, and multimedia asset requests. Another limitation is that these requests are typically handled through email or online web forms that take days to weeks to process.

Users need to understand the details of licenses and terms of use statements because these details vary between objects and institutions. It is prudent for users to cross check multiple databases and do thorough image rights research as part of their process. Open access is part of the necessary reform to the landscape of rights and permissions pitfalls. Unambiguous and legally operative terms like Creative Commons Zero make the ability to use and reuse clearer for users. The public should have confidence using open access museum content in their creative projects as aligned with the terms of use.

Marie Denise Villers (French, Paris 1774–1821 Paris (?)). Marie Joséphine Charlotte du Val d'Ognes (1786–1868). 1801. Oil on canvas. 63 1/2 × 50 5/8 in. (161.3 × 128.6 cm). The Metropolitan Museum of Art. Mr. and Mrs. Isaac D. Fletcher Collection, Bequest of Isaac D. Fletcher. 17.120.204. https://www.metmuseum.org/art/collection/search/437903

JB: Improving art data to preserve and protect our art historical records is something I think about a lot. I worry that we may not get there in my lifetime. How would you describe your view of the need to improve art data? How does this look? How long do you think it will take us to get there? What are the biggest stumbling blocks to improving art data? How do we overcome them?

NS: We are already on the way to improving the quality of art data in the broadest sense of the concept. The GLAM sector continues to see steady progress in its commitment to open access around the world. A steady succession of new institutions is joining the open access wave. Just think about what has been achieved already and where we are right now. Some of the world’s leading and most significant institutions have made the open access transition with sincere public declarations and celebrations of their collections. Those institutions that lag behind must be held to account by their directors, boards, and staff to implement an open access future. Open access is a plateau that institutions must reach as soon as possible if they wish to participate in the next tier of digital, educational, and culturally relevant efforts that are inextricably interlinked with global technological innovation. Much has been achieved. More is to be done.

I see a future of open art data where entire ecosystems and suites of content (e.g., code, data, images, multimedia assets, and texts) are circulating in creative production between humans and machines, or what Director of MIT Media Lab Joi Ito refers to as “extended intelligence.” I can imagine a landscape where museum publishing becomes increasingly automated by bots pulling from open access texts, which is an exciting opportunity, but also speaks to the urgent need to improve infrastructure and copyright policy to expand our possibilities for making an inclusive and boundary-traversing art history. I see new applications being built by the commercial sector in partnership with museums that improve the user experience of exhibitions and collections. I imagine new commercial products being made in brand partnerships with new businesses that increase revenue and operational sustainability for museums. The road will be built collaboratively with iterative joint efforts from commercial and prosocial actors. Wikimedia platforms can have a vital role to play as a shared and unified, yet decentralized, third space where the integrated knowledge systems can be formed as they have not been before.

The biggest stumbling blocks are apathy, doubt, and fear. Museums and those allied across the cultural heritage communities can overcome these obstacles with dedication, mutual support, and ultimate concern for our users: the public. Museums, too, must prioritize users' liberty and individual self-actualization. As Merete Sanderhoff, Curator and senior advisor at the National Gallery of Denmark, stated in “The Only Way is Open,” open access aims to make “human creativity from all times and all corners of the world accessible to all citizens, to foster new knowledge and inspire new creativity.”   

Vilhelm Hammershøi (Danish, 1864-05-15 - 1916-02-13). Interior in Strandgade, Sunlight on the Floor. 1901. Oil on canvas. 46.5 x 52 cm. The National Gallery of Denmark. The Royal Collection of Paintings and Sculptures. KMS3696. https://www.smk.dk/en/highlight/stue-i-strandgade-med-solskin-paa-gulvet-1901/

JB: Is there anything else you want to share, Neal?

NS: I want to thank my colleagues Nik Honeysett, Daniel Brennan, Michael Weinberg, and Ryan Merkley for their constructive feedback on this interview. Thank you, Jason, for the invitation to collaborate on this project. Those interested in working with me as a consultant can send me a message via the contact page of my website, Twitter, or LinkedIn.


Giving Generative Art Its Due

April 17, 2019 Jason Bailey
Mantel Blue, Manolo Gamboa Naon (Personal collection of Kate Vass), 2018

I have long dreamed of attending an art exhibition that presented the full range of generative art starting with early analog works of the late 1950s and ranging all the way up to new AI work we have seen in just the last few years. To my knowledge, no such show has ever existed. Just to attend such a show would be a dream come true for me.

So when the Kate Vass Galerie proposed that I co-curate a show on the history of generative art, I thought I had died and gone to heaven. While I love early generative art, especially artists like Vera Molnar and Frieder Nake, my passion really centers on contemporary generative art. So pairing up with my good friend Georg Bak, an expert in early generative photography, was the perfect match. Georg brings an unmatched passion for and detailed understanding of early generative art that firmly plants this show in a deep and rich tradition that many have yet to learn about.

As my wife can attest, I have regularly been waking up at four in the morning and going to bed past midnight as we race to put together this historically significant show, unprecedented in its scope.

I couldn’t be more enthusiastic about or proud of the show we are putting together, and I am excited to share the official press release with you below:


Invitation for Automat und Mensch (Machine and Man)

“This may sound paradoxical, but the machine, which is thought to be cold and inhuman, can help to realize what is most subjective, unattainable, and profound in a human being.” - Vera Molnar

In the last twelve months we have seen a tremendous spike in interest in “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself - a history that has largely been overlooked in the recent mania for “AI art,” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.

Generative art, once perceived as the domain of a small number of “computer nerds,” is now the artform best poised to capture what sets our generation apart from those that came before us - ubiquitous computing. As children of the digital revolution, computing has become our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.

Though slow to gain traction in the traditional art world, generative art produces elegant and compelling works that extend the very same principles and goals that analog artists have pursued from the inception of modern art. Geometry, abstraction, and chance are important themes not just for generative art, but for much of the important art of the 20th century.

Every generation claims art is dead, asking, “Where are our Michelangelos? Where are our Picassos?” only to have their grandchildren point out generations later that the geniuses were among us the whole time. With generative art we have the unique opportunity to celebrate the early masters while they are still here to experience it.

 
9 Analogue Graphics, Herbert W. Franke, 1956/’57

The Automat und Mensch (Machine and Man) exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as it has never been shown before.

 
Correction of Rubens: Saturn Devouring His Son, Robbie Barrat, 2019

Emphasizing the deep historical roots of AI and generative art, the show takes its title from the 1961 book of the same name by German computer scientist and media theorist Karl Steinbuch. The book contains important early writings on machine learning and was inspirational for early generative artists like Gottfried Jäger.

We will be including in the exhibition a set of 10 pinhole structures created by Jäger with a self-made pinhole camera obscura. Jäger, generally considered the father and founder of “generative photography,” was also the first to use the term “generative aesthetics” within the context of art history.

10 Pinhole Structures, Gottfried Jäger, 1967/’94

We will also be presenting some early machine-made drawings by the British artist Desmond Paul Henry, considered to be the first artist to have an exhibition of computer-generated art. In 1961, Henry won first place in a contest sponsored in part by the well-known British artist L.S. Lowry. The prize was a one-man show at The Reid Gallery in August 1962, which Henry titled Ideographs. In the show, Henry included drawings produced by his first drawing machine (1961), adapted from a wartime bombsight computer.

Untitled, Desmond Paul Henry, early 1960s

The show features other important works from the 1960s through the 1980s by pioneering artists like Vera Molnar, Nicolas Schoeffer, Frieder Nake, and Manfred Mohr.

We have several generative works from the early 1990s by John Maeda, former president of the prestigious Rhode Island School of Design (2008-2014) and associate director of research at the MIT Media Lab. Though Maeda is an accomplished generative artist with works in major museums, his greatest contribution to generative art was perhaps his invention of a platform for artists and designers to explore programming, called "Design By Numbers."

Casey Reas, one of Maeda’s star pupils at the MIT Media Lab, will share several generative sketches dating back to the early days of Processing. Reas is the co-creator of the Processing programming language (inspired by Maeda’s “Design By Numbers”), which has done more to increase the awareness and proliferation of generative art than any other single contribution. Processing made generative art accessible to anyone in the world with a computer: you no longer needed expensive hardware, and more importantly, you did not need to be a computer scientist to program sketches and create generative art.

This ten-minute presentation introduces the Process works created by Casey Reas from 2004 to 2010.
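[For readers who have never seen one, here is the flavor of a minimal generative sketch - translated into plain Python with PIL rather than Processing itself. Everything below is an illustrative toy, not a work from the show. - JB]

```python
import random
from PIL import Image, ImageDraw

# A toy generative sketch in the spirit of early Processing experiments:
# semi-transparent circles whose positions drift along a random walk.
random.seed(7)  # fix the seed so the "edition" is reproducible
img = Image.new("RGB", (800, 800), "white")
draw = ImageDraw.Draw(img, "RGBA")  # RGBA mode enables alpha blending

x, y = 400, 400
for _ in range(2000):
    x = min(max(x + random.randint(-15, 15), 0), 800)
    y = min(max(y + random.randint(-15, 15), 0), 800)
    r = random.randint(2, 24)
    draw.ellipse(
        (x - r, y - r, x + r, y + r),
        fill=(random.randint(0, 255), random.randint(0, 80), 180, 28),
    )
img.save("sketch.png")
```

Change the seed, the palette, or the step size, and you have a new piece: that tight loop between code tweak and visual result is exactly what Processing made accessible.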

Among the most accomplished artists to ever use Processing are Jared Tarbell and Manolo Gamboa Naon, who will both be represented in the exhibition. Tarbell mastered the earliest releases of Processing, producing works of unprecedented beauty. Tarbell’s work appears to have grown from the soil rather than from a computer and looks as fresh and cutting edge today as it did in 2003.

Substrate, Jared Tarbell, 2003

Argentinian artist Manolo Gamboa Naon - better known as “Manolo” - is a master of color, composition, and complexity. Highly prolific and exploratory, Manolo creates work that takes visual cues from a dizzying array of aesthetic material, from 20th-century art to modern-day pop culture. Though varied, his work is distinct and immediately recognizable, consistently pushing the limits of what is possible in Processing.

aaaaa, Manolo Gamboa Naon, 2018

With the invention of new machine learning tools like DeepDream and GANs (generative adversarial networks), “AI art,” as it is commonly referred to, has become particularly popular in the last five years. One artist, Harold Cohen, explored AI and art for nearly 50 years before we saw the rising popularity of these new machine learning tools. In those five decades, Cohen worked on a single program called Aaron that involved teaching a robot to create drawings. Aaron’s education took a similar path to that of humans, evolving from simple pictographic shapes and symbols to more figurative imagery, and finally into full-color images. We will be including important drawings by Cohen and Aaron in the exhibition.
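[DeepDream, which returns later in this piece, rests on a simple trick: run an image through a trained network and adjust the image, not the weights, so that some layer’s activations grow. Here is a generic PyTorch-flavored sketch of that gradient-ascent idea - not Mordvintsev’s original code, and “photo.jpg” is a placeholder path. - JB]

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Gradient ascent on the input image: the core of the DeepDream effect.
layers = models.vgg16(pretrained=True).features[:20].eval()
img = Image.open("photo.jpg").convert("RGB").resize((448, 448))
x = transforms.ToTensor()(img).unsqueeze(0).requires_grad_(True)

for _ in range(30):
    loss = layers(x).norm()  # how strongly the chosen layer responds
    loss.backward()          # gradient w.r.t. the pixels, not the weights
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)  # normalized step
        x.grad.zero_()

transforms.ToPILImage()(x.detach().squeeze(0).clamp(0, 1)).save("dream.jpg")
```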

AI and machine learning have also added complexity to copyright, and in many ways the laws are still catching up. We saw this when Christie’s sold an AI work in 2018 by the French collective Obvious for $432k that was based heavily on work by artist Robbie Barrat. Pioneering cyberfeminist Cornelia Sollfrank explored issues around generative art and copyright back in 2004, when a gallery refused to show her Warhol Flowers. The flowers were created using Sollfrank’s “Net Art Generator,” but the gallery claimed the images were too close to Warhol’s “original” works to show. Sollfrank, who believes “a smart artist makes the machine do the work,” argued that the images created by her program were sufficiently differentiated. She responded to the gallery by recording conversations with four separate copyright attorneys and playing the videos simultaneously. In this act, Sollfrank raised legal and moral issues regarding the implications of machine authorship and copyright that we are still exploring today. We are excited to be including several of Sollfrank’s Warhol Flowers in the show.

 
Anonymous_warhol-flowers, Cornelia Sollfrank

Anonymous_warhol-flowers, Cornelia Sollfrank

 

While we have gone to great lengths to focus on historical works, one of the show’s greatest strengths is the range of important works by contemporary AI artists. We start with one of the very first works by Google DeepDream inventor Alexander Mordvintsev. Produced in May of 2015, DeepDream took the world by storm with surreal acid-trip-like imagery of cats and dogs growing out of people’s heads and bodies. Virtually all contemporary AI artists credit Mordvintsev’s DeepDream as a primary source of inspiration for their interest in machine learning and art. We are thrilled to be including one of the very first images produced by DeepDream in the exhibition.

 
Cats, Alexander Mordvintsev, 2015

Cats, Alexander Mordvintsev, 2015

 

The show also includes work by Tom White, Helena Sarin, David Young, Sofia Crespo, Memo Akten, Anna Ridler, Robbie Barrat, and Mario Klingemann.

Klingemann will show his 2018 Lumen Prize-winning work The Butcher’s Son. The artwork is an arresting image that was created by training a chain of GANs to evolve a stick figure (provided as initial input) into a detailed and textured output. We are also excited to be showing Klingemann’s work 79543 self-portraits, which explores a feedback loop of chained GANs and is reminiscent of his Memories of Passersby which recently sold at Sotheby’s.

 
The Butcher’s Son, Mario Klingemann, 2018

The Butcher’s Son, Mario Klingemann, 2018

 

Automat und Mensch takes place at the Kate Vass Galerie in Zürich, Switzerland, and will be accompanied by an educational program including lectures and panels from participating artists and thought leaders on AI art and generative art history. The show runs from May 29th to October 15th, 2019.

Participating Artists:

  • Herbert W. Franke

  • Gottfried Jäger

  • Desmond Paul Henry

  • Nicolas Schöffer

  • Manfred Mohr

  • Vera Molnar

  • Frieder Nake

  • Harold Cohen

  • Gottfried Honegger

  • Cornelia Sollfrank

  • John Maeda

  • Casey Reas

  • Jared Tarbell

  • Memo Akten

  • Mario Klingemann

  • Manolo Gamboa Naon

  • Helena Sarin

  • David Young

  • Anna Ridler

  • Tom White

  • Sofia Crespo

  • Matt Hall & John Watkinson

  • Primavera de Filippi

  • Robbie Barrat

  • Harm van den Dorpel

  • Roman Verostko

  • Kevin Abosch

  • Georg Nees

  • Alexander Mordvintsev

  • Benjamin Heidersberger

For further info and images, please do not hesitate to contact us at: info@katevassgalerie.com  


Autoglyphs, Generative Art Born On The Blockchain

April 8, 2019 Jason Bailey
Collection of four Autoglyphs

Collection of four Autoglyphs

If you are a regular Artnome reader, you know we are big on blockchain and generative art. So of course I was super excited when my good friends Matt Hall and John Watkinson of CryptoPunks fame gave me a sneak peek of Autoglyphs, their new project which creates old-school generative art that literally lives on the blockchain.

In this post I nerd out with Matt and John about Autoglyphs, grilling them with all kinds of questions, including:

  • What are Autoglyphs and how do they work?

  • How do Matt and John manage to actually put art on the blockchain?

  • Did early generative art serve as inspiration for Autoglyphs?

  • Why did they create just 512 out of billions of possible Autoglyphs?

  • What are the differences between Autoglyphs and CryptoPunks?

  • Do Matt and John think of themselves as artists?

  • What makes a good Autoglyph?

Autoglyphs are unusual in that the actual image files associated with blockchain art like CryptoPunks, CryptoKitties, or Rare Pepe are traditionally stored in a database somewhere “off chain,” meaning off of the blockchain. Artists typically address this “off chain” storage by recording a reference on the blockchain - commonly a “hash” - that points to the image file, so that you can locate the artwork from its record. For example, even though the image of my CryptoPunk comprises relatively few pixels, it actually lives “off chain” on the LarvaLabs server at:

https://www.larvalabs.com/public/images/cryptopunks/punk2050.png

Screen Shot 2019-03-30 at 6.40.36 PM.png

This means the actual artwork does not technically benefit from any of the tamper-proof advantages like “decentralization” or “immutability” typically associated with the blockchain (unless you think of the token itself as the artwork instead of the image). Put another way, there is nothing stopping someone from altering, moving, or removing the image from the location the hash is pointing to. If that were to happen, all you would be left with is an immutable record stating that you own an artwork, with no way of actually seeing it.

Perhaps you are thinking, “Why not just store the image on the blockchain? It is, after all, a database, right?” Well, blockchain is great for a lot of things, but storing large image files is not one of them. Unless you can make art with a super tiny footprint, it is impractical to store traditional image files like JPEG or PNG on the blockchain.
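To make the off-chain fragility concrete, here is a minimal sketch of how a collector might check whether a hosted image still matches a fingerprint recorded at purchase time; if the file is ever altered or removed, the check fails. The EXPECTED value below is a placeholder, not a real fingerprint.

```python
# Verify an off-chain artwork against a previously recorded fingerprint.
import hashlib
import urllib.request

URL = "https://www.larvalabs.com/public/images/cryptopunks/punk2050.png"
EXPECTED = "paste-the-sha256-you-recorded-at-purchase-here"  # placeholder

image_bytes = urllib.request.urlopen(URL).read()
fingerprint = hashlib.sha256(image_bytes).hexdigest()

print("intact" if fingerprint == EXPECTED else "image changed or missing")
```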

This is what makes Autoglyphs so damn cool. Matt and John decided to accept the storage limitations of the blockchain as a challenge to see what they could create that could actually be stored “on chain.”

A. Michael Noll, Computer Composition with Lines, 1964, digital computer and microfilm plotter

A. Michael Noll, Computer Composition with Lines, 1964, digital computer and microfilm plotter

Piet Mondrian, Composition With Lines, second state, 1916-17, oil on canvas, © Rijksmuseum Kröller-Müller

Piet Mondrian, Composition With Lines, second state, 1916-17, oil on canvas, © Rijksmuseum Kröller-Müller

I love this idea because it is a throwback to the compute and storage challenges that the earliest generative artists like Michael Noll and Ken Knowlton faced when trying to create art on computers in the early 1960s. As you will see, this is not lost on Matt and John, who are huge fans of early generative art and decided to embrace the aesthetic and run with it. With that, let’s jump into the interview.

Autoglyphs - An Interview with Matt Hall and John Watkinson

glyph41.png

Jason Bailey:  Thanks for chatting, guys. I have a bunch of questions, but I’m happy to start with the easy one. What was the impetus or inspiration behind Autoglyphs?

John Watkinson: There is a lot of talk of art on the blockchain. With the CryptoPunks, all of the ownership and the provenance is permanently and publicly available, and those rules are set and fixed. And yet there's still a bit of an imperfection there in that the art comes from outside of the blockchain and stays out there, and it's just referenced by a smart contract. We don't have any complaints about the CryptoPunks, but it felt like there was an opportunity to go further. With Autoglyphs, we asked ourselves, “Can we make the entire thing completely self-contained and completely open and operating on the blockchain?”

JB: So the decision to literally store the artwork on the blockchain comes with some pretty hardcore restrictions, right? What sort of parameters are you now boxing yourself into once you make that decision?

JW: You have to have very small and efficient code generating the work. The actual output of the work has to be a very small amount of data or text because you can't have a large amount of data on the blockchain. So a small amount of efficiently running code, and fairly small, efficient output.

Those were the constraints, and they were pretty extreme. For a while I thought we couldn't do it, or couldn't do it in a way that was satisfying for us. I was sort of exploring various generators and trying to make them more efficient, just binary image generators. I got to one that I thought was pretty good and I then experimented with it, trying to turn it into a smart contract, and I just couldn't get it to work. It was just hitting limits and wasn't working at all.

Then I tried it a few months later and just pushed it a little further and just got there. Still, the transaction fee of making an Autoglyph is going to be about half of an Ethereum block. So an Ethereum block is about eight million gas, so that's how much computation can happen in one mined block of Ethereum, and this is going to be three million gas, so it's almost half a block.

That means that the transaction fees will be relatively expensive - between one and two dollars - depending on the price of gas. So it's a pretty hefty transaction. If we went much more than that, we would already be outside of feasibility. If we went over eight million, it would be completely impossible, you wouldn't be able to do it.
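A quick back-of-the-envelope check on those figures (the gas price and ETH price below are assumptions for illustration; both fluctuate constantly):

```python
# Rough cost of minting one Autoglyph, using the figures from the interview.
GAS_PER_GLYPH = 3_000_000   # ~3M gas, per John
GAS_PER_BLOCK = 8_000_000   # Ethereum's block gas limit at the time
GAS_PRICE_GWEI = 3          # assumed gas price
ETH_USD = 140               # assumed ETH price (spring 2019)

fee_eth = GAS_PER_GLYPH * GAS_PRICE_GWEI / 1e9   # gwei -> ETH
print(f"{GAS_PER_GLYPH / GAS_PER_BLOCK:.0%} of a block")  # 38% of a block
print(f"~${fee_eth * ETH_USD:.2f} per glyph")             # ~$1.26
```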

JB: Got it. Dumb question: Does the code for generating the image live on the blockchain? Or is there actually an image on the blockchain?

JW: The code lives on the blockchain, and in fact, when you ask the blockchain for the image, it will just generate it again for you. That part happens on an end node, so that doesn't cost any actual money or gas. But whenever you say, “Give me the image for Autoglyph five,” it will just generate it again for you based on the seed information that was created in the transaction.

Matt Hall: It's probably also worth making the distinction between the image and the instructions to generate different representations of it. The actual image you see on the website is not generated on the blockchain. The art, the instructions for how to write it are on the blockchain, but we make an SVG or PNG file on the web server. If that was your question, then no, the actual image data doesn't come off the blockchain, but there's an ASCII representation of exactly what that is on there. It's an ASCII art representation of the glyph.

Screen Shot 2019-04-01 at 11.22.33 AM.png

JB: Nice. That was going to be my next question. I love ASCII art, and I assumed that it was generating some sort of ASCII format. So the ASCII art version of the image is an image made out of text and is actually on the blockchain. But in addition to that, you're generating PNGs or JPEGs for end user convenience that you've got hosted at Larva Labs? Is that a fair way to put it?

Screen Shot 2019-04-01 at 11.25.14 AM.png

JW: Yes. We're generating the image, and we basically created instructions on how to do that. So in the source code for the actual smart contract, if you scroll down a little bit below that big ASCII art “Autoglyphs,” you'll see that there are these little instructions. For every ASCII art character, it tells you how to draw it. We generate image files that way. But the idea is that anyone can generate it - kind of like a Sol LeWitt instruction set for creating a drawing. If you own a glyph, then you can make it at any scale, with any materials you want. You can make your Autoglyph using these instructions.
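For the technically curious, reading that on-chain output is a free, read-only view call. The sketch below uses web3.py against the widely published Autoglyphs contract address; treat the minimal ABI, the draw signature, and the RPC endpoint as assumptions to verify against the deployed source before relying on them.

```python
# Regenerate an Autoglyph's ASCII art straight from Ethereum mainnet.
# Read-only view calls cost no gas. The RPC endpoint is a placeholder,
# and the minimal ABI below is my reading of the deployed contract.
from web3 import Web3  # web3.py v6-style API

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/YOUR_KEY"))
AUTOGLYPHS = Web3.to_checksum_address(
    "0xd4e4078ca3495de5b1d4db434bebc5a986197782")

ABI = [{"name": "draw", "type": "function", "stateMutability": "view",
        "inputs": [{"name": "id", "type": "uint256"}],
        "outputs": [{"name": "", "type": "string"}]}]

glyphs = w3.eth.contract(address=AUTOGLYPHS, abi=ABI)
print(glyphs.functions.draw(5).call())  # ASCII art for Autoglyph #5
```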

lewitt_49_instructions.jpg

JB: Great. That was going to be my next question. Is it a bit like a Sol LeWitt, where essentially if Larva Labs, God forbid, goes out of business and you decide that you no longer want to support the interface, people will have everything they need built into this little bit of blockchain code to infinitely generate these Autoglyphs? Will Autoglyphs outlive us all?

Sol LeWitt, Wall Drawing 87, June 1971

Sol LeWitt, Wall Drawing 87, June 1971

JW: Yes, that's the idea. They'll be able to make their Autoglyphs and follow these instructions to render them. We have a little pen plotter, and we're going to physically render some of our Autoglyphs with it, kind of just for fun. They're well set up for plotting that way.

MH: We were kicking around different versions of this, and then we saw this show at the Whitney - a retrospective of digital art, with early generative art and all sorts of different stuff. There was this big Sol LeWitt piece, and they were explicit about how the piece had been executed by an assistant at the gallery, but that's in keeping with the intention of the artwork and the instructions of the artist. We thought that was perfect, because we can't do a lot of the things we want to do directly on the blockchain, but we can have the spirit of it be completely self-contained.

Sol LeWitt (1928-2007), Wall Drawing #289, 1976

Sol LeWitt (1928-2007), Wall Drawing #289, 1976

By providing the instructions on the blockchain, the art can now be rendered very large and detailed. For example, we could have stored these as tiny pixel graphics or something like that, but then you're limited to that. This way they can operate at any scale and in any material.

JB: It does feel like a throwback to some of the early generative art. I'm thinking like Ken Knowlton and Michael Noll. Other than Sol LeWitt, were there other artists who inspired the Autoglyphs? Or do they just look like old-school generative art due to the storage limitations of the blockchain?

Ken Knowlton, from the pages of Design Quarterly 66/67, Bell Telephone Labs computer graphics research.

Ken Knowlton, from the pages of Design Quarterly 66/67, Bell Telephone Labs computer graphics research.

JW: A little bit of both. We definitely needed to clamp down the parameters pretty hard because of the technical requirements, but we'd been getting into the early pioneering digital art of the '60s and early '70s stuff. It's definitely an homage to Michael Noll and Ken Knowlton and that kind of stuff, which we really love. Only once we got to this digital art world via the CryptoPunks did we really realize how much of all this stuff had been explored in the '60s. It’s almost humbling how much ground was covered so quickly in digital art in the '60s and early '70s.

JB: Yeah, I love early generative art. It looks like from the Autoglyphs site that the algorithm, while it had to be simple by definition, is capable of generating billions of unique artworks, but then there are 512 that ultimately will be produced before it stops, right? So how do those 512 get selected among the billions of possible works? And second part of the question, why 512?

MH: Good question. They're going to be randomly seeded. There's a random seed that goes into the algorithm to generate them, and if you operate the contract manually, you can specify the seed manually - but you can't reuse an existing seed that's already been used to make a glyph. We debated whether to limit it or not, whether to make it so that everyone and anyone can come and get their glyph. There are a few arguments in each direction, but ultimately, when you make generative art like this, the generator kind of is the artwork, and there's only so much it can express.

It's basically a very tiny generator. If you scroll down in that source code, the core of the generator is the draw function, which is only about forty lines. So we said, “At what point does a generator kind of play itself out, where you've seen everything?” You could make more, but it's just going to be like, “Oh, that's similar to that one, that's similar to that one,” so how much surprise and variety can it really deliver? So we found that threshold.

We made it a power of two just to keep it nerdy. But that was around the threshold where we said, “This is about the right amount of these things in order to fully explore the generator but not make them all worthless because there are so many similar ones. This should be enough to discover cool surprises and get a sense of what it can generate and have a good collection out there, but not hit it too hard and destroy all the mystery of it.”
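As a toy model of the mechanic Matt describes - every glyph is a pure function of its seed, seeds can never be reused, and creation halts at 512 - here is a sketch with invented names (the real logic lives in the Solidity contract):

```python
# Toy model of seeded, capped generation. Same seed -> same output;
# reused seeds are rejected; creation stops at the 512th glyph.
import hashlib

SUPPLY = 512
used_seeds = set()
glyphs = {}  # token id -> "art"

def create_glyph(seed):
    if len(glyphs) >= SUPPLY:
        raise RuntimeError("all 512 glyphs have been created")
    if seed in used_seeds:
        raise ValueError("that seed has already been used")
    used_seeds.add(seed)
    # stand-in for the real draw function: deterministic output per seed
    art = hashlib.sha256(seed.to_bytes(32, "big")).hexdigest()
    glyphs[len(glyphs) + 1] = art
    return art

print(create_glyph(42)[:16])  # same seed always yields the same "art"
```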

Drawing code for Autoglyphs

Drawing code for Autoglyphs

JB: Sweet. And then you mentioned on the site that 128 of the Autoglyphs are already claimed, so who claimed them?

JW: We're going to claim those. We want to have a decent chunk that we can explore and mess around with, and we want to display them in large groups together. That's how many we're taking for ourselves and the rest are going to be up for grabs.

MH: It's a similar model to the CryptoPunks, where we wanted to convert ourselves into the same kind of relationship to the artwork as everyone else. So we just become owners after the thing is launched, and we like how that sort of played out on CryptoPunks. People ask, “Why don't you take a cut of all the sales?” Well, we didn’t take a cut of the CryptoPunks, so we want to just be the same as everybody else. We felt that that still was the right way to go with this.

JB: Right. It's experimental and you're along for the ride with the same level of risk as everybody else, right?

JW: Yes, exactly. That informs the sale price for the rest of them and where that money goes, and then we don't feel like we need to claim the sale price of those things. We can donate that because we have a portion of the artwork.

JB: Got it. No, that totally makes sense, and I'll come back to the charity stuff, too. For me, at least, CryptoPunks was sort of stealth generative art, meaning that most people don't know what generative art is, and they didn't need to in order to love CryptoPunks, right? I think part of the appeal of CryptoPunks was that anybody could look at them and get it and fall in love with them, like, “Oh, cool, look at all these different cool characters.”

You also received interest from art nerds like me, and you were in that awesome show with the Kate Vass Galerie. Are you worried at all that the Autoglyphs may not have the same broad appeal? Or maybe you didn't even assume that there was going to be a broad appeal for CryptoPunks, either, kind of going back to your assumption of these things being experiments?

A collection of CryptoPunks

A collection of CryptoPunks

JW: Yes, I think that's what it is. We didn't expect it with the CryptoPunks; we don't really expect that here. We know people like you and the other people we've met who are into this stuff, and we know that there will be at least a narrow appreciation of this for the same reasons we dig it. But no, we don't necessarily expect it to have as broad an appeal as the CryptoPunks, just because they were a little more consumer-friendly, easier to engage with, easier to understand. You didn't necessarily need to know that they were generative; you just liked them, like, “I want one that looks like me.” You're not going to find an Autoglyph that looks like you, so…

MH: If you do, that'd be cool!

JB: I like that challenge — that's the first thing I'm going to do when I get off the call.

Autoglyph #130 - the Autoglyph I believe looks most like my inner self

Autoglyph #130 - the Autoglyph I believe looks most like my inner self

JW: Yeah, it's more of like a Rorschach image.

MH: You see your true self in the Autoglyphs.

JW: Exactly. Yeah, you see your emotional self. We took the attitude, “Let's not worry about that; let's just kind of do experiments that we like and we think are cool and resonate with us.” But there's no doubt that we were like, “Let's keep the size small here, because the audience might just be smaller, and that's fine.” It doesn't need to be as big, or have as wide a variety of people owning it, or as high a transaction volume as the CryptoPunks.

JB: Got it.

MH: I think it's fair to say that we're starting to think a little longer term about these things, too, now that we're coming up on two years since the CryptoPunks launch. We thought CryptoPunks might be just a blog post, a couple weeks of interest and the end of it - and it's still going strong. And then seeing this generative art from the '60s, which had some similarities with the very limited computing ability we have to work with, it just felt like, “There's cool stuff to explore here that could have appeal long term.” It's okay even if it doesn't have the broad appeal at the moment; it's fine.

JB: What are you guys? Do you think of yourselves as artists, and had you in the past, or has that changed in the last few years?

JW: It's funny you ask the “what are you guys” question, because we've been looking at each other the last couple weeks asking the same question. What are we, what are we doing here? We're quite a wide variety of things, and this is one of them.

And obviously it’s almost a loaded term: We're artists now, I guess. And especially looking at generative art from the ‘60s with fresh eyes. There were a lot of people working at Bell Labs and just experimenting and trying things out. Then in hindsight we can look back at that and be like, “Man, that's really cool art that really predates this whole digital art thing.” And they were just engineers, they were nerds just expressing themselves. I think we put ourselves in that camp happily, so not claiming that we're career artists or that's what we’re trying to promote ourselves into, but claiming the ability to express ourselves and make things just like anyone else.

I don't know, Matt. Is that how you feel about it?

MH: Yeah. I felt more comfortable with that term when I found out the history of technicians becoming recognized as artists because they have the skills necessary to operate something new.

JW: And they were thinking about it more than anyone else.

MH: Yeah, just familiar with it, and would see the limitations and the strengths in how they're utilized. So I feel pretty comfortable in that category.

JB: So the CryptoPunks were initially free. Autoglyphs are coming in at like $27.69, with proceeds going to the charity 350.org. Could you maybe share a little bit of the thinking behind that? Why 350.org?

Screen Shot 2019-04-07 at 1.59.55 PM.png

MH: Even with the CryptoPunks, where we gave away 9,000 of them, a large number of them went to a few early people who just got on it and automated that process, so we wanted to avoid that. We wanted to have a better distribution of people, so we felt the best way to get that was some price associated with generating them.

JW: So then the solution there was, “Let's donate that money to charity,” and then if the whole set sells out, then it will be a pretty good total.

MH:  So if we can sell out of these things it'll be about $10k to 350.org, which is a good organization for trying to move power generation over to renewables. It felt like the right fit in all of those dimensions.

JB: Great, yeah. A softball question, then, for each of you: what makes a good Autoglyph?

JW: I think with a generator you kind of get a sense of what it makes, and then you get surprised by a few things. So I always like the ones that are just like, “Woah, that's not what I expected.” Once you look through 40 or 50 of them, you can always tell which ones are crazy or weird looking, and it's always fun when it breaks out of expectation. Those are the ones I like. I like ones with diagonal lines. For some reason, those are the most appealing - ones that are just made out of diagonal lines.

MH: I think we both like the ones where, because the symbol sets are simple, it's cool when you get the sense that there is a pattern there that's not actually there. There are ones that look like there are curves in them, but there aren't. I like that a lot. I also like ones that look different at different scales. So when they're zoomed out, they look like one thing, and then as you zoom in, it dissolves. It's something we're trying to figure out now as we work on physical representations of them: how thick should the lines be, what's the ideal viewing distance, where do these patterns resolve? I think that's my answer.

JB: Cool. And then anything you want to share on the launch process? I think you mentioned the date in the email, but are there plans to show the physical works anywhere specific?

JW: Yeah. We're going to launch them first on the web and on the blockchain, and then we'll figure that out next. I think we do want to show a bunch of the glyphs that we claimed for ourselves, maybe at one of the art shows in New York in May. We're going to figure out which one is the best one to do that for. We haven't totally figured that out yet. We first just want to put it up; we still want it to be an experiment that pops up on the internet and not a gallery-type launch or anything like that.

JB: Thanks for your time, guys! I think Autoglyphs are awesome and can’t wait to add some to the Artnome collection!



Why Is AI Art Copyright So Complicated?

March 27, 2019 Jason Bailey
Left, GANbreeder image by Danielle Baskin. Right, GANbreeder image painted on canvas commissioned by Alexander Reben

Left, GANbreeder image by Danielle Baskin. Right, GANbreeder image painted on canvas commissioned by Alexander Reben

Despite claims that machines and robots are now making art on their own, we are actually seeing the number of humans involved with creating a singular artwork go up, not down, with the introduction of machine learning-based tools.

Claims that AI is creating art on its own and that machines are somehow entitled to copyright for this art are simply naive or overblown, and they cloud real concerns about authorship disputes between humans. The introduction of machine learning as an art tool is ironically increasing human involvement, not decreasing it. Specifically, the number of people who can potentially be credited as coauthors of an artwork has skyrocketed. This is because machine learning tools are typically built on a stack of software solutions, each layer having been designed by individual persons or groups of people, all of whom are potential candidates for authorial credit.

This concept of group authorship that machine learning tools introduce is relatively incompatible with the traditional art market, which prefers singular authorship because that model streamlines sales and supports the concept of the individual artistic genius. Add to that the fact that AI art - and more broadly speaking, generative art - is algorithmic in nature (highly repeatable) and frequently open source (highly shareable), and you have a powder keg of potential authorial and copyright disputes.

The most broadly publicized case of this was the Edmond Belamy work that was sold by the French artist collective Obvious through Christie’s last summer for $432k. I have already explored that case ad nauseam (including an in-depth interview with the collective). I cite it here only to point out that a large number of humans were involved in creating a work that was initially publicized as having been created by a machine.

In this article we look in detail at the recent GANbreeder incident (which we outline below) that has received some attention in the mainstream press. This is another case where the complexity of machine learning has driven up, not down, the number of humans involved with the creation of art and led to a great deal of misunderstanding and hurt feelings.

For this article I spoke with several people involved in the incident:

  • Danielle Baskin, the artist who alleges that Alexander Reben used her and other people’s images from GANbreeder

  • Alexander Reben, the artist accused of using other people’s GANbreeder images

  • Joel Simon, the creator of GANbreeder

I was also lucky enough to speak with Jessica Fjeld, an attorney with the Harvard Cyberlaw Clinic, who has written about and researched issues involving AI-generated art relative to copyright and licensing. She is the first lawyer I have spoken with who truly understands the nuances of law, machine learning, and artistic practice.

The GANbreeder Incident

Danielle Baskin’s GANbreeder Feed including a time stamp for the image in question

Danielle Baskin’s GANbreeder Feed including a time stamp for the image in question

GANbreeder is the brainchild of developer Joel Simon. Simon created a custom interface to Google’s BigGAN so that non-programmers can collaborate on generating surreal images that combine pictorial elements of the user’s choosing to “breed” child images. If you are not sure what GANs (generative adversarial networks) are, you can check out this earlier article we wrote covering the topic.

Let’s look at a super simple GANbreeder example here. I clicked a few buttons in the GANbreeder interface and chose to cross an agaric mushroom with a pug. GANbreeder then outputs 6 images with varying degrees of influence from both the mushroom and the pug. Results below:

Screen Shot 2019-03-22 at 8.23.36 PM.png
Screen Shot 2019-03-22 at 8.24.04 PM.png

You can get more sophisticated and breed many things against each other in combinations, but the tool is dead simple (thanks to Joel Simon’s great design) and literally anyone can use it in seconds without training.
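Under the hood, “breeding” amounts to blending the class vectors that condition a BigGAN generator. The sketch below imitates the idea using the community pytorch-pretrained-BigGAN package; GANbreeder’s actual internals may differ, and the class names and blend weights here are just for illustration.

```python
# Imitating GANbreeder's "breeding": blend two ImageNet class vectors
# and feed the mix to a pretrained BigGAN. Illustrative only.
# Requires: pip install pytorch-pretrained-biggan
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample,
                                       convert_to_images)

model = BigGAN.from_pretrained("biggan-deep-256")

mushroom = one_hot_from_names(["agaric"], batch_size=1)  # parent 1
pug = one_hot_from_names(["pug"], batch_size=1)          # parent 2
noise = torch.from_numpy(
    truncated_noise_sample(truncation=0.4, batch_size=1))

# Vary the blend to get "children" that favor one parent or the other
for i, w in enumerate([0.2, 0.4, 0.6, 0.8]):
    child_class = torch.from_numpy(w * mushroom + (1 - w) * pug)
    with torch.no_grad():
        out = model(noise, child_class, 0.4)
    convert_to_images(out)[0].save(f"child_{i}.png")
```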

It was Simon’s vision that people would collaborate using GANbreeder and expand the tool through other creative uses. Along those lines and with Simon’s support, conceptual artist Alexander Reben wrote a scraper for GANbreeder that automatically grabbed images and stored them locally on his PC. Once the images were local, Reben applied a custom selection algorithm that would choose images he liked or disliked based on his body signals.

Reben believed the images he scraped from GANbreeder were randomly generated (as he states in this early interview with Engadget). He then sent the images selected via his body signals to a painting service in China where anonymous artists created painted versions on canvas. He called the project amalGAN.

amalGAN, Alexander Reben

amalGAN, Alexander Reben

Reben then shared the painted images widely on social media in support of his upcoming gallery shows. This triggered an avalanche of anger and frustration from other GANbreeder users. They began to complain that Reben had stolen images that they had created using the GANbreeder system.

Screen Shot 2019-03-23 at 10.12.35 AM.png

Reben acknowledged that he did not realize the images were being created by humans. It was his understanding that the images were automatically generated at random by the algorithm.

At the time of my interview, Reben could not confirm that his scraper had not included exact images by other artists, but he believes a tiny percentage (3 out of 28) of his images were subtle variations of works that other artists had created.

The first person to call Reben out on this on Twitter was artist and serial entrepreneur Danielle Baskin. Baskin is a GANbreeder power user who often stayed up until 5:00 a.m. breeding images. She even started a service called GANvas where people could select images on GANbreeder and she would print them on canvas and ship them to customers around the world.

Volcano Dogs, Danielle Baskin on GANbreeder

Volcano Dogs, Danielle Baskin on GANbreeder

When I spoke with Baskin about her experiences with GANbreeder, she was careful to state that she felt she was “discovering” images on GANbreeder vs. “creating” them.

I feel like I am discovering them, not creating them. They all exist; you’re finding them. That is why I view the image as having an intelligent force behind it. It’s like I am discovering someone’s works.

Then why get so upset with Reben for “discovering” a similar image? Baskin explained the source of her frustrations with Reben’s work.

I thought that the whole project was so awful. Like, it was just so bad that it couldn’t have been real, but that it was a statement. Then I learned that it was real and I was like, “F*ck this project.”

Not that it is a competition or something. But he sort of took all the things I had in progress and had been thinking about for a long time and was immediately able to get a gallery show and sell work and stuff. And he didn’t present a clear story as to what he was doing. So that upset me. All these things were on my mind because I was so obsessed with GANbreeder.

It’s like you are writing a history book and you have been researching your subject matter for a year, and someone publishes a history book on the same subject matter, but they barely researched it and were able to sell tons of their books on Amazon. Someone took your content and got all this credit for it, but it wasn’t even good.

The gene sequence Danielle Baskin used to create the disputed image

The gene sequence Danielle Baskin used to create the disputed image

It was clear to me that Baskin was not a fan of Reben’s work. But I wondered if she thought he had done anything malicious or with bad intentions. I also wondered if she felt he had resolved the issue. Project aside, what did she think of him as a person?

When I met him in person, I realized Alex has built an incredible community of artists that use technology and he is a great person. It’s funny because I hate his art, but I like him - but I don’t like him as an artist.

In giving him the benefit of the doubt and in talking with him, I think he genuinely didn’t know how it [GANbreeder] worked. He thought when he refreshed the home page it was totally random images from latent space; he had no idea that other people created the images. He knew the creator of GANbreeder, so maybe he thought that Joel would have explained that to him if it were the case that it was created by other people.

I told Reben that it looked to Baskin and others like he was trying to take shortcuts, or was at least trying to remove himself from the work in some aspects. He partially objected and explained:

There was still a lot of work with me training the data sets on the art that I like and I didn’t like. The real idea was that all of the work was done before the art was made. And the actual art making process was just two simple steps back and forth. Everything involved with that is complicated, involving servers and building computers and learning algorithms and all that sort of stuff.

The interesting thing is that a lot of effort and knowledge went into the code making. A lot of the creativity was compressed into that code, whereas now that the code is made, it is now a tool for me to… Like I said in one of the reports, I can now lay in a hammock and look at a screen and be able to just use this system to produce output.

I asked him specifically what the amalGAN project was about.

The project was, to me, about human/machine collaboration and how many steps and layers of abstraction I could add. On my website I have like seven steps of human to machine, human to machine, back and forth. I had the idea that the final step is basically the machine giving a human somewhere else the activity of using their brain to upscale the image, using their brain to interpret how to turn pixels into paint. It is basically like the machine using human knowledge to execute rather than being printed out on a printer. To me, that is conceptually interesting because it has to do with that human/machine collaboration.

I then asked Reben how he felt about the issue with Baskin and others in the GANbreeder community and what, if anything, he had done to reconcile it.

I’m sorry this happened out of mostly my ignorance of the system. I probably should have done a bit more research. When I learned there was an issue, I changed my system so it would never happen again. I’m sorry people feel this way. I think I did as much as I could at the time to get permission from Joel and to address as many concerns as I could by inviting people over to discuss. I do have the disclaimer on my website and again in my talks that some images may have come from the GANbreeder community. I have no way to verify that because there are no records of who made what.

I think most reasonable people at this point, including Baskin, acknowledge that it was done unknowingly. However, it could have become more serious - Baskin shared with me that she had considered sending Reben a cease and desist letter.

This exchange of course opens up all kinds of legal questions, and it is here that I believe things actually become interesting. For example:

  • Does Reben have the legal right to use an image that is either similar to or the same as the one that Baskin created in GANbreeder?

  • Would Reben’s work meet the legal definition of a “derivative work”?

  • How much would Reben need to change the image for it to be considered fair use? Is turning it into a painting enough?

  • What if it was the same image, but he used it as a conceptual component instead of as an aesthetic component?

  • Does it matter that Joel Simon’s intention for GANbreeder was for artists to build on each other’s works?

  • As the developer of the interface/tool, does Simon deserve some ownership over the works?

  • What about the folks who created BigGAN or the folks who designed the graphics cards? Do they deserve credit?

To help navigate all of this, I spoke with Jessica Fjeld, the assistant director of the Cyberlaw Clinic at Harvard Law School. I share the majority of our interview because I believe Fjeld does an excellent job of shedding light on an incredibly murky topic. It is my hope that sharing her explanations might keep other AI artists from entering into sticky situations around copyright and authorship moving forward.

The Legal Implications - Jessica Fjeld

GANbreeder_Images_Jason_Bailey

Fjeld patiently walked me through several concepts that helped me to better understand how law interacts with the new AI-generated works. Like me, Fjeld believes that all the talk about whether machines deserve copyright is overblown and distracts from real issues surrounding increased complexity of human attribution. Unlike me, she can explain the reasons why and the implications within our legal system. Fjeld explains:

Mostly the question that gets asked is, “Will AIs get smart enough that they can own their own copyright?” To me that is not that interesting because I think AGI (artificial general intelligence) is a ways out. If we get AGIs and we decide to give them legal personhood the way we give to humans and corporations, then yeah, they can have copyright, and if we decide not to do that, then no, they can’t, end of question.

In the meantime, what we really have are sophisticated, interesting tools that raise a bunch of questions because of the humans involved in collaboration making stuff with them. So we get these complicated little knots. But they are not complicated on a grand philosophical level, like, “Can this piece of software own copyright?” They are just complicated on the level of which of these people involved do [own copyright], and what parts of it.

I asked Jessica what the legal implications were in the GANbreeder incident. Disclaimer: Alexander is a past client of Jessica’s, but she is not currently representing him in relation to the GANbreeder incident.

It is a fascinating question. I have tooled around a little bit with GANbreeder myself, so I can understand it. One thing that is important to note is that copyright protects original expressions that are fixed. So “original,” “fixed,” and “expression” are the key terms here.

Something has to be new, and obviously, much of what is on GANbreeder is. Part of what makes it an exciting website is you get some of these really unfamiliar feelings - sometimes eerie, sometimes funny.

Then the next word we learned about is “expression.” Copyright does not protect ideas; it only protects particular expressions of those ideas. So if I had the idea to put “dog, mountains, and shell” into GANbreeder, and I got an image that was similar to the one that someone else is now using, that is not protectable. The exact image, maybe; but a very similar one, no. And something that is very interesting about GANbreeder, as I was tinkering with it: if you have it create a child on the scale from similar to different, and you say to make it very similar, a lot of the children images that come out are very, very similar. There may be individual pixels or a slight shift in the orientation, but at a casual glimpse, you wouldn’t even necessarily see [the difference].

It’s interesting especially because of the timing of when Alex took these images off, when all the works on GANbreeder were unsigned because there were no accounts. It’s a little hard to say. If you were thinking about pursuing an infringement case, you would really have to prove the exact image had been copied rather than a similar idea where, say, one is orange and one is red.

I asked Fjeld how different a work had to be to be considered original.

In GANbreeder, if you keep making tiny changes, eventually you are going to get something that does have what we would call “originality” in copyright. But it is really hard to say when that happens. And in a lawsuit, it will just be a fact-specific inquiry: Is this the same or is it not? And we have this concept of derivative works for works that are very similar. It can be an infringement to make something that is extraordinarily similar, but not just a mere reproduction.

I asked Fjeld if it mattered that Joel Simon’s intention for GANbreeder users was to build upon each other’s existing works. Wasn’t Reben simply using the tool as intended? It turns out there is a thing called an “implied license.” Fjeld explains:

The other piece around how GANbreeder encourages folks to draw on other people’s work I think brings up another interesting question, which is that it’s largely settled law, particularly in the Ninth Circuit in the U.S., that you can grant a non-exclusive license to use your work in an implied way, so it doesn’t have to be explicit.

U.S. copyright law does require that if you are going to dispose of your right to the work - either by giving an exclusive license to someone else or by selling your copyright - you have to have a writing. But for an implied, non-exclusive license, you don’t have to have a writing. And at least some courts have upheld that it can just be implied - you don’t even have to have a conversation about it.

And when I look at GANbreeder, because of the way it’s set up, because of the way the whole system is architected, it gives you an image created by someone else and encourages you to iterate on it. It certainly looks to me like there is an implied license to do that within the context of the site. Anyone who is creating work there understands that other people are going to use it as a basis to make their own work.

Now, when courts look for implied licenses, it is again a fact-specific inquiry. I think with regard to what Alex did, the question is whether people understood that the implied license they were granting covered not just monkeying around with their images in the context of the GANbreeder app, but also integrating them into this other system, having them painted by anonymous painters in China, and showing them in a gallery. They might not have anticipated that, and that’s probably where the issue comes in.

There was an implied license to do something, but the scope of that implied license wasn’t totally clear. Then that is complicated because it is a site that is architected with a thousand models and images in it, so you are essentially navigating the points in a multi-dimensional space created by that number of models and can have any combination of those thousand images. But it creates a lot of very similar images.

So the combination of the fact that the scope of the implied license wasn’t very clear and the fact that people may have an attachment to their ideas or individual expressions and then may see a very similar one… it is my understanding that Alex’s project shouldn’t have directly just reproduced anyone else’s; it would have started with someone else’s, and then he tweaked it based on his body signals.

I wondered why Reben’s work would not be considered derivative and asked Fjeld if she thought it could legally be considered so.

I would say that yes, there is an argument that Alex’s works could be considered derivative of existing works on the GANbreeder website. There remains the question of the implied license because the derivative work is a copyright infringement, but if the use is licensed, then there is no infringement.

There is also a question of what the damages would actually be, because in copyright, you can get statutory damages if you register your work in a narrow window around its creation or before the infringement happens. If you don’t do that - and to my knowledge, none of the GANbreeder images have been registered - then what you get is actual damages. And it’s not totally clear what the damages would be for folks that anonymously created images on a website and then later found that someone had them painted and displayed them in a gallery.

*I also don’t know if there have been any sales. There is the image that Alex used and whether there is a derivative work in that process, and then he takes this further step and has them painted into oil paintings, which, again, I think is another tweak. So there is a series of manipulations of the underlying content.

*Note: There have not been any sales.

I asked Jessica if she thought these “manipulations” by Reben pointed towards “fair use” (a term I had heard in the past but did not fully understand).

Yes, they do steer me more to think about fair use. As I have heard Alex presenting on this work, he really emphasized that for him, it really isn’t about the outputs; they are not the artwork at all. For him, the artwork is the process by which he trained this series of systems to produce the artwork, tested them against his own preferences, titled them, etc. For him, the interesting thing is the process by which he tried to design a bunch of algorithms to take himself as far as possible out of the creation process. The expression of them is that he ends up putting his name on painted images in a gallery. But even putting his name on them is a little complicated in light of how he was thinking about the artwork.

When we think about fair use, one of the main factors that courts consider is how transformative the use is. And I do think there is a strong argument here that, because of the underlying theme of the work, we could think about it as fair use, because we want to incentivize this kind of exploration of the space. The way that Alex talks about it, there is an argument that, ethically speaking, it should be clearer that despite the paintings being up at a show with his name on them, he doesn’t really think of himself as the author of them in a certain way. The use is transformative because it is making this point of how far we can push toward algorithmic authorship.

You could think about the Richard Prince vs. Patrick Cariou case. They are both fine art photographers, but Prince is a conceptual artist, an “appropriation artist,” he calls himself, and Cariou is a more traditional fine art photographer.

Left, one of Patrick Cariou’s photographs of Rastafarians, and at right, a painting from Prince’s ‘Canal Zone’ series

Left, one of Patrick Cariou’s photographs of Rastafarians, and at right, a painting from Prince’s ‘Canal Zone’ series

Cariou had gone to Jamaica, shot this book of photographs, and done a gallery show, all of Rastas. He spent all of this time investing in these relationships to produce these images. Prince used them in a gallery show in which he manipulated them a little bit. One of the classic images that gets shown a lot is a full image of a guy hanging out in a jungle setting, and Prince very roughly cut out an electric guitar and pasted it in on top of him. A lot of the original image was still there with this crude-looking addition on top. And Prince won that case as fair use because the argument he made is that he had transformed the content. Yes, Cariou was also a photographer who had a gallery show, but Prince was using it in this conceptual, imaginary space. I think you could think of Alex’s work in a similar way.

Fjeld shared my belief that the driving force behind some of the confusion in AI-generated art was that more people, not fewer, are typically involved. We talked a bit about the developer behind GANbreeder, Joel Simon, and what rights he had, if any, to the works.

In GANbreeder you can click a button and it’s possible to get the coolest thing that GANbreeder ever produced. And how much do we want to think that is in line with the goals of copyright if someone is just clicking a button and the software is producing it… how much do we want the person who clicks the button to be the person who gets the rights? Do we think that Joel, who set up the system, gets some rights? Do we think the people who put in the work to create these models, which took thousands and thousands of hours of computing power, should get some rights?

There used to be a doctrine in copyright called “sweat of the brow” where courts had an instinct that they wanted to protect people’s investment of time, and that has been rejected. So the notion that people who spent time to create the model should earn rights in the outcomes isn’t the state of copyright in the U.S. right now. But there is something in there that ethically feels to us like if you just click a button once, you are involved in that creation, but maybe you shouldn’t be the person who gets all the rights.

I found Fjeld’s explanations both fascinating and much needed in this space. It was a welcome reprieve to hear a lawyer talk about these issues that we keep seeing coming up in the AI art space without over-focusing on the red herring of whether the machine deserves copyright.

Conclusion

Regardless of what the law says, we all answer to the court of public opinion, and it hasn’t been particularly kind to Alex Reben over the GANbreeder incident. I think the animosity towards Reben stems from folks not liking that he appears, on the surface, to be doing less work than other artists yet getting more attention - a common complaint leveled against conceptual artists. But more importantly, I think people can see with their own eyes that at least one of his works looks exactly the same as an image created by Danielle Baskin, and a few others are similar to images made by other members of the GANbreeder community.

I like Alex and consider him a friend. I also like Danielle and plan on following her work moving forward. So I thought back to what I learned from Jessica Fjeld about it being important whether Alex’s work was the exact same as Danielle’s. This seemed like a pretty easy thing to figure out, so I compared the two images using James Cryer’s excellent tool Resemble.js, which can compare two images and highlight the differences.
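Resemble.js runs in JavaScript; for anyone who wants to reproduce the check in Python, a rough equivalent takes a few lines with Pillow. Filenames here are placeholders, and the second image is resized so the two match.

```python
# A rough, Pillow-based stand-in for Resemble.js: subtract the images
# pixel by pixel and report how many pixels differ at all.
from PIL import Image, ImageChops

a = Image.open("reben_glyph.png").convert("RGB")
b = Image.open("baskin_glyph.png").convert("RGB").resize(a.size)

diff = ImageChops.difference(a, b)
diff.save("difference.png")  # pure black = identical pixels

pixels = list(diff.getdata())
mismatch = sum(1 for px in pixels if any(px)) / len(pixels) * 100
print(f"{mismatch:.2f}% of pixels differ")
```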

GANbreeder image claimed by Alexander Reben

GANbreeder image claimed by Alexander Reben

GANbreeder image claimed by Danielle Baskin

GANbreeder image claimed by Danielle Baskin

Analysis of both images from Reben and Baskin highlighting lack of differences using Resemble.js

Analysis of both images from Reben and Baskin highlighting lack of differences using Resemble.js

Other than a little bit of aliasing (I took a lower-resolution screenshot of Baskin’s image), they look the exact same to me. I shared my new findings with Alex and asked if he would consider removing the image from his website in light of the new evidence. He did one better and called Baskin to discuss the best way to move forward. Reben then crafted the following statement, which he first ran by Baskin for approval.

I spoke to Danielle by phone to work out what she thought would be fair for me to do to move past this issue, given all the information we have at this time. We landed on giving her a credit under the artwork on my website as "Original GANbreeder image sourced from Danielle Baskin" and to make the credit for GANbreeder more obvious on the page. If any other images happen to arise with a similar issue, I'll have to deal with them on a case-by-case basis. But since the images from the website at that time have no authorship information and may be randomly generated, there may be no other issues apart from the few which were already identified. I'm also only concerned with images which are basically the same, not images which are similar and "bred" from a like set of "seed words," as this use aligns with the spirit of the website. Of primary concern to both of us was to put this issue to rest so that GANbreeder can continue to be used as a creative tool and grow from what was learned.

Score one for the court of public opinion.

As always, if you have questions or ideas you can reach me at jason@artnome.com.


Blockchain Art 3.0 - How to Launch Your Own Blockchain Art Marketplace

February 27, 2019 Jason Bailey
Warhol Thinks Pop, Hackatao - 2018

Warhol Thinks Pop, Hackatao - 2018

The most frequent question we are asked at Artnome is, “How can I get my artwork on to the blockchain?” Finally, with the development of what I am calling blockchain art 3.0, we are seeing new tools that enable artists to tokenize their own art and sell it on their own marketplace.

In this article I am going to show you how I set up a blockchain-based marketplace in less than an hour without coding. But before we dive into a tutorial on how to use these new applications and speak with the teams behind them, we first look at the evolution that led up to this point.

If you are eager to just learn about the new applications, you can skip the history and go to the part that describes these new offerings and how you can use them to create your own tokenized artwork on both the Bitcoin and Ethereum blockchains.

How We Got To Blockchain Art 3.0

Joe Looney presenting the Rare Pepe Wallet at RareAF, NYC, 2018

Joe Looney presenting the Rare Pepe Wallet at RareAF, NYC, 2018

Before we jump into blockchain art 3.0, let’s take a look at the evolution of blockchain art that got us to where we are today. While people have been making art “about” the blockchain since its inception, I consider blockchain art 1.0 the period when folks first started exploring “digital scarcity” and gave birth to the idea of selling art on the blockchain.

A big problem with producing and selling digital art is how easily it can be duplicated and pirated. Popular opinion is that once something is copied and replicated for free, the value drops and the prospect of a market disappears. Most collectors feel that for art to have value, it needs to have measurable and provable scarcity.

Blockchain helps solve this for digital artists by introducing the idea of "digital scarcity":  issuing a limited number of copies of an artwork, each associated with a unique token issued on the blockchain. This provable scarcity is the same concept that enables tokens like Bitcoin and Ethereum to function as currency.
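To make the bookkeeping concrete, here is a toy, in-memory model of what an ERC-721-style contract enforces on chain: a fixed supply of tokens, each tied to a content hash and transferable only by its current owner. All names and values are invented for illustration.

```python
# Toy model of digital scarcity: a fixed-supply token ledger.
# An ERC-721-style contract enforces the same rules on chain.
class ScarceEdition:
    def __init__(self, artwork_hash, supply):
        self.artwork_hash = artwork_hash  # ties every token to one image
        self.supply = supply              # hard cap: provable scarcity
        self.owners = {}                  # token id -> owner

    def mint(self, to):
        if len(self.owners) >= self.supply:
            raise RuntimeError("edition sold out")
        token_id = len(self.owners) + 1
        self.owners[token_id] = to
        return token_id

    def transfer(self, token_id, sender, to):
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self.owners[token_id] = to

edition = ScarceEdition("sha256:placeholder", supply=10)
print(edition.mint("alice"))          # -> 1 (token 1 of 10)
edition.transfer(1, "alice", "bob")   # ownership moves, supply unchanged
```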

Blockchain art 1.0 was the Wild West, as there was no real blueprint yet for artists and technologists to work from. Though often overlooked by the mainstream media, there is no question for me that blockchain art 1.0 started with Joe Looney’s Rare Pepe Wallet. You can read my in-depth description of Rare Pepe Wallet in this earlier post, but for now all you need to know is that Rare Pepe Wallet pioneered the possibilities of buying, selling, trading, editioning, gifting, and destroying digital artworks on the blockchain. Joe and the Rare Pepe community not only conceived of the first such market, they were the first to prove it could work at scale, selling over $1.2 million worth of digital art.

Smooth Haired Pepe, 1/1000

Smooth Haired Pepe, 1/1000

On the immediate heels of the success of Rare Pepe Wallet, we saw the development of several other experimental projects, each trying new things. CryptoPunks, Dada.NYC, and CurioCards were all very different from each other (and from Rare Pepe Wallet), as no real template for how art on the blockchain should work had fully been established. It is noteworthy that all of these blockchain 1.0 solutions were driven by the “decentralized” ethos, developed more as creative communities than as businesses with a model for making money. For me, this is the Golden Age of blockchain art - the era that attracted the OGs, the weirdos (said lovingly), and the truly creative mavericks in the space, who were motivated more by creative experimentation than any obvious financial benefit.

London Tacos, from left: Matt Hall (CryptoPunks), John Zettler (Rare Art Labs), Judy Mam (Dada.nyc), John Crain (SuperRare), Charlie Crain (SuperRare), Jon Perkins (SuperRare), Bea Ramos (Dada.nyc)

London Tacos, from left: Matt Hall (CryptoPunks), John Zettler (Rare Art Labs), Judy Mam (Dada.nyc), John Crain (SuperRare), Charlie Crain (SuperRare), Jon Perkins (SuperRare), Bea Ramos (Dada.nyc)

Blockchain art 2.0 started after CryptoKitties exploded and people saw that there was actually an opportunity to make money with digital art on the blockchain. A half dozen or so blockchain art marketplace startups launched with fairly similar functionality to one another. They were almost all based on Ethereum, featured slick professional interfaces, and streamlined the tokenization of art.

These 2.0 marketplaces were run more like businesses than the experimental grassroots community projects from the blockchain 1.0 days. They often have investors, legal advisors, advertising budgets, and corporate titles within their organizations. However, the most successful seem to be the ones that build communities and, through unified marketing, bring a collector base to lesser-known artists. Blockchain 2.0 offerings include SuperRare, KnownOrigin, Portion, RareArtLabs, and DigitalObjects. In blockchain 2.0, the offerings were similar enough on the surface that it was really the artists these startups were able to attract, more so than the tech, that separated them from one another.

Example of a clean Blockchain Art 2.0 interface (SuperRare), which contrasts with the DIY 1.0 UX/UI

There were several approaches to recruiting artists. The most successful seemed to be dropping gallery commissions on primary sales and adding a commission for artists on the secondary market (SuperRare), and consistent grassroots recruiting (KnownOrigin). Recruiting collectors, on the other hand, has proven more difficult, as the launch of these blockchain art 2.0 offerings coincided with the decline of the cryptocurrency markets. Most people have either temporarily jumped out of the cryptocurrency market to stop the bleeding or are holding on, waiting for the market to recover before spending their currency.

That said, there are certainly more than a handful of highly dedicated cryptoart collectors, and the artists themselves have formed a tight community and frequently collect each other’s work. This is not unlike the behavior we have seen in art movements of the past, where bartering and trading artworks among artists was common.

Launching a Blockchain Art Marketplace

Some early user-created markets from the Pixura platform

Throughout the development of blockchain art 1.0 and 2.0, many artists have wanted to tokenize their own work and offer it on a blockchain market where they can control the look, messaging, and user experience. This makes sense, as artists work hard to brand their own image, and the promise of the blockchain was supposed to be that they could sell their own work without a middleman.

Until now, I pointed artists to a half dozen marketplaces where the artist had no control over how their work was displayed or which artists their work was displayed next to. While some were fine with this and embraced the experiment, the lack of artistic control was a deal breaker for many other artists who care greatly about the context in which their work is shown. As an example, if an artist is producing what they consider to be serious generative artworks, they may not want their work sandwiched randomly between floral still lifes and a thousand digital images of Bitcoin/Ethereum symbols. For many artists, this type of context matters a great deal.

With blockchain art 3.0, artists can take control of the entire process, aided by tools that make it easy to tokenize artwork and require no coding skills or technical knowledge. I cover two such tools in this article:

  • Pixura Platform (beta) - Allows anyone to issue and sell virtual items on the Ethereum blockchain, including art and rare digital collectibles

  • Freeport.io (pre-alpha) - Allows people to collect, create, and trade cryptogood assets issued on Counterparty using the Bitcoin blockchain

Far from competing with each other, these are two complementary solutions that function on the Ethereum and Bitcoin blockchains respectively, and we are lucky to have both. What makes these two tools stand out for me is that both were developed by people who have already built their own active blockchain-based art marketplaces and seen them succeed at scale. It also helps that I know the developers behind both solutions personally, think highly of them, and am comfortable recommending their tools.

If you favor Ethereum and you are looking for something you can use starting today, Pixura is for you. They do charge fees, which may not scale as well for some use cases, but with those fees comes the support from a responsive team of experts working on the project full time.

If you prefer Bitcoin, want to pay zero fees, or want/need a completely open-source solution, Freeport.io is for you. There is some functionality already built into Freeport, but it might be another month or so before it is fully functional - so that is another aspect to take into consideration.

We go into more detail below. Feel free to jump to the solution you think might best apply to you.

Pixura - Tokenize Art and Launch a Blockchain Art Marketplace With Ethereum

The Pixura Platform is built by the same team, on the same codebase, as the SuperRare marketplace. They have been fast to launch, eager to solicit user feedback, and quick to add meaningful features. As a result, SuperRare is among the fastest-growing platforms from the blockchain art 2.0 era, and its artists earned roughly $100K in the first year of the platform’s existence.

According to a recent interview with Pixura/SuperRare CPO Jon Perkins:

Pixura is a wide open platform – anyone can launch a smart contract and create their own NFTs without writing any code. We’ve already seen a bunch of interesting projects get created in one week, and I expect to see hundreds more by the end of the year. We are also working on some exciting collaborative partnerships, which will be announced later in the year.

I decided to launch my own marketplace on Pixura to see how easy/difficult it would be. I was pleasantly surprised - the entire operation from start to finish took less than an hour (including visual customization) and cost me under $30.

I put together this short tutorial to walk you through the process. The tutorial assumes you already have a MetaMask wallet account and at least $27 in Ethereum in your wallet.

Here are the steps to launch your own blockchain art marketplace on Pixura:

  • First, go to the Pixura mainnet link: https://platform.pixura.io/

  • Click on the “Launch a Collection” button

  • Choose “Ethereum Mainnet” to launch a functioning marketplace

  • Sign in to Pixura via your Gmail account

  • Connect to MetaMask

  • Launch your smart contract

  • Pay the $25 fee (plus gas) to launch your marketplace

  • Confirm smart contract deployment

  • Check Etherscan for the transaction details

  • Click on your project (on the right side of the screen)

  • Click on “Add New Collectible”

  • Name your collectible and add an image

  • Add as much custom metadata as you like (this is a nice feature)

  • Price and launch your collectible

  • Customize the look of your marketplace

You can see the results of my marketplace here. I have a bunch of ideas for what I actually want to do with my marketplace, but it is just a couple of test images for now.

Hopefully you found the Pixura interface for creating a marketplace to be as user-friendly as I did. I think its simplicity is its strong point. I am also a big fan of the ability to add new properties, and I know that the Pixura team provides great tech support.

While I like that Pixura gives me more branding autonomy than putting my work directly into SuperRare, there is still a sense that my marketplace is one of many marketplaces within Pixura. This is similar to how I might have my own Etsy shop, but it still lives next to all the other Etsy shops on the Etsy parent site. In some ways this is a plus because people coming to see other Pixura marketplaces have a higher likelihood of stumbling onto my marketplace.

But what if I want a marketplace with 100% branding control, where nobody else’s logo shows up and I am not clustered with other marketplaces? Pixura assured me that a completely white-labeled version is on the roadmap for the near future. But there are some other options as well.

If you are a little more technical, looking for complete autonomy from branding, want to avoid paying any fees, prefer an open-source solution, and can get by without a lot of tech support, then you may want to wait a month or so to explore Freeport as an option.

Freeport.io - Tokenize Art and Launch a Blockchain Art Marketplace With Bitcoin

At the time of this writing, Freeport is pre-alpha and has about a month to go before it will be ready for marketplace creation. I decided to include it anyway because it offers a really nice counterpart to the Pixura platform.

Freeport is the brainchild of Joe Looney, the developer behind Rare Pepe Wallet. Joe is creating Freeport as a completely open-source solution (MIT License), so if you are technical, you can use all the code to do whatever you want with it. But Freeport is specifically designed with less technical people in mind. With just a little Bitcoin, from a single interface you will be able to:

  • Create your asset (CounterParty)

  • Upload your art (Imgur)

  • Attach it to the asset (CounterParty)

  • Search a directory (DigiRare.com)

  • Put orders up to sell through the DEX (decentralized exchange)

Joe has brilliantly structured Freeport to use several existing best-in-class, off-the-shelf solutions, including Imgur, CounterParty, and DigiRare. These decisions were born out of necessity to simplify maintenance and upkeep (Joe is building Freeport for fun in his free time), but this strategy may turn out to be Freeport’s greatest strength. As Joe puts it:

The Bitcoin blockchain is good at creating scarce digital assets (via Counterparty) and then allowing the uncensorable transfer of them. It is not good for storing images. Even with IPFS, any projects utilizing it are generally running their own IPFS node for storage. The only way to guarantee that images stored via IPFS are available is to maintain a node and host them yourself, and at that point what are you really even doing? With Freeport, as the developer I don’t need to run any additional software to host images because an image hosting service (Imgur initially) will be hosting them for me. My plan is to also include options to use other hosting services and eventually allow artists to specify custom image locations.

Since I am building Freeport in my free time, I don’t want the responsibility of curating questionable content. One of the problems with something like IPFS or self-hosted storage is that you, the developer, maintain that responsibility. To eliminate that additional work, I’ve leveraged a hosted storage service that has its own code of conduct. It also demonstrates that “decentralized storage” is a fun thing to have, but it’s not absolutely necessary. Immutability is achieved by including a hash of the image as well as the image location (Imgur URL) as part of the asset information stored on the Bitcoin blockchain via Counterparty. If Imgur were to become unavailable, the artist has the ability to update the image location; however, the hash remains unchanged. This means if the artist changes the contents of the image, it is obvious from the record that it’s not the original. Imgur is great at providing the means for everyone to see the image initially and for the foreseeable future. However, over time, it becomes the responsibility of the issuer and asset holders to retain the image themselves.
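To illustrate the scheme Joe describes, here is a minimal sketch of the verification step: recompute the hash of whatever currently lives at the image URL and compare it to the fingerprint recorded at issuance. The field names, URL, and hash below are placeholders of my own, not Freeport’s actual data format:

```python
import hashlib
import urllib.request

# An asset record roughly as Joe describes it: a mutable pointer plus an
# immutable fingerprint, both recorded on-chain at issuance (placeholder values).
asset = {
    "name": "EXAMPLEPEPE",
    "image_url": "https://i.imgur.com/example.png",   # artist can update this
    "image_sha256": "9f2b...",                        # this can never change
}

def verify_image(asset: dict) -> bool:
    """Fetch the image from its current location and check it against the
    hash recorded at issuance. A mismatch means the contents were altered."""
    with urllib.request.urlopen(asset["image_url"]) as resp:
        data = resp.read()
    return hashlib.sha256(data).hexdigest() == asset["image_sha256"]
```

The pointer can move (say, from Imgur to self-hosting), but any change to the image itself is immediately detectable, which is exactly the property Joe is after.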

Looney also takes advantage of CounterParty on the back end for token issuance, and of Bitcorns creator Dan Anderson’s excellent DigiRare site, which is designed to provide a directory for viewing all art and collectibles on the Bitcoin blockchain.

While Freeport is still a few weeks from launching, you can install the early build as a Chrome browser extension today and be among the first to use it when it is ready for prime time.

To install Freeport.io:

  • Download the Chrome extension

  • Go to chrome://extensions/ in your Chrome browser

  • Make sure “Developer Mode” is selected and click on “Load Unpacked”

  • Select the directory “Chrome Extension”

Be sure to follow Looney on Twitter at @wasthatawolf for updates on additional functionality in Freeport as it becomes available.

Summary

Hopefully you found this article/tutorial helpful and you are off to the races building your own marketplace and tokenizing your own art and collectibles. I don’t think you can go wrong with either Pixura or Freeport. Hopefully I have outlined the differences between the two well enough that you know which one is right for you. Here is a quick summary:

  • Availability

    • Pixura is live and you can launch a marketplace today

    • Freeport is pre-alpha and will be ready in roughly a month

  • Blockchain

    • Pixura lives on the Ethereum blockchain

    • Freeport lives on the Bitcoin blockchain

  • Support

    • Pixura provides support to paying customers

    • Freeport: Joe provides support when he can (this is his side project)

  • Fees

    • Pixura charges $25 to launch a market, $1 to launch a collectible, and takes a 3% fee on all transactions on your marketplace (see the quick cost sketch after this list)

    • Freeport is a community project with zero fees

  • Architecture

    • Pixura utilizes the same proprietary code used on SuperRare

    • Freeport leverages a combination of solutions (Bitcoin, CounterParty, DigiRare, Imgur) and is open source under the MIT license
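As a quick back-of-the-envelope comparison, Pixura’s fees above are easy to model, while Freeport’s equivalent is simply zero plus ordinary Bitcoin transaction costs. A hypothetical helper:

```python
def pixura_fees_usd(n_pieces: int, avg_sale_usd: float) -> float:
    """Rough Pixura fees using the numbers above: $25 launch, $1 per
    collectible, 3% of each sale. Ethereum gas costs are excluded."""
    return 25 + 1 * n_pieces + 0.03 * (n_pieces * avg_sale_usd)

# e.g., launching 10 pieces that sell for $100 each:
print(pixura_fees_usd(10, 100.0))   # 25 + 10 + 30 = 65.0
```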

Conclusion

It is a really exciting time for those of us who have been following the development of art and collectibles on the blockchain. You no longer need to understand the complexities of writing your own smart contracts to launch your own digital art and collectibles marketplace, and that should be huge in driving mainstream adoption for creators.

However, I believe the next big problem is going to be growing the number of collectors. One of the great advantages of participating in a marketplace like SuperRare as an artist is that they do all the marketing for you. I think some artists may realize that putting their art “on the blockchain” does not necessarily translate to more sales. You still need to find someone interested in buying/collecting your work. And the number of people who know how to buy art using cryptocurrency is even smaller than the number of people who know how to buy art with fiat (regular currency). An increase in the number and variety of digitally scarce objects we can collect could bring new collectors into the market, but it could also flood the market and reduce demand.

I’m optimistic that an increase in “scarce digital goods” in the gaming market could help drive adoption and understanding for the blockchain art market as well. At least in the short term, I think we’ll see a spike as people explore these new tools and innovate in ways that nobody has thought of yet. And hopefully we’ll see a bit more of the weird blockchain 1.0 spirit come back to the community.

Thanks for reading, as always if you have questions or ideas you can reach out to me directly at jason@artnome.com.


AI Artist Robbie Barrat and Painter Ronan Barrot Collaborate on “Infinite Skulls”

February 6, 2019 Jason Bailey

It is early in the year, but the most compelling show for art and tech in 2019 may already be happening. AI artist and Artnome favorite Robbie Barrat has teamed up with renowned French painter Ronan Barrot for a fascinating show that lives somewhere in the margin between collaboration and confrontation.

L'Avant Galerie Vossen emailed Robbie last April after seeing his AI nude portraits and asked if he would be willing to fly out to Paris to work with Ronan. Robbie agreed and flew out last July to meet Ronan, and the two have been working together ever since. The show, titled “BARRAT/BARROT: Infinite Skulls,“ opens Thursday, February 7th, and literally features an “infinite” number of skulls.

Why Skulls?

For the last two decades, it has been artist Ronan Barrot’s tradition to use the remaining paint on his palette to paint a skull each time he stops, interrupts, or finishes a painting. As it was explained to me, the skulls are like a side process of the main painting; it’s like cleaning out your motor after driving for miles and miles. Ronan now estimates that he has painted a few thousand of these, and this massive visual data set of painted skulls was perfect for AI artist Robbie Barrat to use in training his GANs (generative adversarial networks).

GANs are composed of two neural networks, which are essentially programs loosely modeled on the human brain. In our case, we can think of these neural networks as two people: first, a "generator," whom we will think of as an art forger, and second, a "discriminator," whom we will think of as an art critic. Now imagine that we gave the art forger a book of 500 skulls painted by Ronan as training material that he could use to create a forgery to fool the critic. If the forger looked at only three or four of Ronan’s paintings, he may not be very good at making a forgery, and the critic would likely figure out the forgery pretty quickly. But after looking at enough of the paintings and trying over and over again, the forger may actually start producing paintings good enough to fool the critic, right? This is precisely what happens with GANs in AI art.
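For the technically curious, here is a minimal sketch of that forger/critic loop in PyTorch. This is a generic toy GAN of my own, not Robbie’s actual code; the image size and the random stand-in dataset are assumptions:

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64        # 64x64 grayscale skulls, flattened

generator = nn.Sequential(                 # the "forger"
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh())

discriminator = nn.Sequential(             # the "critic"
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in for batches of real paintings, scaled to [-1, 1]:
dataloader = [torch.rand(32, img_dim) * 2 - 1 for _ in range(100)]

for real in dataloader:
    b = real.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # 1) Train the critic: real paintings should score 1, forgeries 0.
    fake = generator(torch.randn(b, latent_dim))
    d_loss = (loss(discriminator(real), ones) +
              loss(discriminator(fake.detach()), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the forger: make the critic score its forgeries as 1.
    g_loss = loss(discriminator(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two networks improve in lockstep: every time the critic learns a new tell, the forger is pushed to eliminate it.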

In Robbie’s words:

I trained the network on the skulls. They are all the same shape, the same size, the same orientation, and they are all looking the same way. The results were good, but they were very similar to Ronan’s original skulls. We have the show chopped up into different epochs, and that is Epoch One, training directly on his skulls.

For Epoch Two I thought about how the coolest part about using GANs is that you’re getting a weird machine viewpoint of artwork. But feeding in all the skulls with the same layout is sort of like you are telling the machine how to look at the paintings. You’re giving it a very fixed perspective and a very normal perspective that we have already seen before.

So for Epoch Two, I basically played around with feeding the machine the skulls completely independent of any rotation or perspective, so the machine sees skulls that are all flipped around and stretched out. I’m using the same model, but the number of skulls in the training set jumped from 500 to 17,000 skulls. And the results are really, really good. It makes these really strange images that you would never expect. You can tell that they are skulls, but they really are not familiar. Ronan really loves those. He really likes to correct some of the skulls. He’ll say something like, ‘I like this one but it’s not right,’ or ‘There is never an image I am completely satisfied with,’ so he corrects it. He also does interpretations of them.

I also think that the Epoch Two skulls raise very interesting questions about authorship - since the network has learned exclusively from Ronan, but the outputs don't strongly resemble his work.
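Robbie hasn’t published the exact script, but the augmentation he describes - random flips, rotations, and stretches that grow roughly 500 scans into 17,000 training images - would look something like this Pillow sketch (the folder names are my own):

```python
import random
from pathlib import Path
from PIL import Image, ImageOps

src, dst = Path("skull_scans"), Path("skulls_augmented")
dst.mkdir(exist_ok=True)
originals = sorted(src.glob("*.jpg"))     # the ~500 original scans

for i in range(17_000):                   # the size of the Epoch Two set
    img = Image.open(random.choice(originals)).convert("RGB")
    if random.random() < 0.5:
        img = ImageOps.mirror(img)        # random horizontal flip
    img = img.rotate(random.uniform(0, 360), expand=True)   # random rotation
    w, h = img.size                       # random stretch in each dimension
    img = img.resize((max(1, int(w * random.uniform(0.6, 1.4))),
                      max(1, int(h * random.uniform(0.6, 1.4)))))
    img.save(dst / f"skull_{i:05d}.jpg")
```

Breaking the fixed orientation this way is exactly what lets the network find the “weird machine viewpoint” Robbie describes.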

I asked Robbie about Ronan’s initial reaction to his work and how the relationship played out.

We are like opposites. He does not like the fact that my work is digital. He said the pixel is sad. And he really was skeptical about it. And right after I visited Paris, he was a little bit hesitant about whether he wanted to do the show, because French painters have this conception of technology and capitalism as the enemy. But now he is really excited about the show. But I think what is important to remember is that this is more like a confrontation than a collaboration. There are collaborative parts of it, but we really are sort of at odds.

Ronan explained to me that at first, he could not see where Robbie was making any decisions in the AI process. Like many, he thought that the “AI” and the “machine” were doing all the work and making all the choices. But quickly after working with Robbie and seeing that there is “choice and desire” in his work, he decided “the pixel is no longer sad.” But adds Ronan:

Of course it is not the same, I am not expecting the same thing from AI as I am from a painting. Both worlds are contiguous, but not the same. They are not the same rules. I hate the very idea of naturalism. As if everything was equivalent to everything else. I love the idea that there are two sets of rules, which allow us to play differently.

Ronan also pointed out that he does not keep all of his skull paintings. He curates them and many times he paints over the ones he does not like. He sees this curation process as not entirely unlike Robbie’s process of choosing which of the AI skulls to keep from the nearly unlimited number he can produce using GANs.

While the two have come to understand and respect each other’s working methods, there is a lot of interesting dialogue between them on what is an actual painting vs. something that is just an image of a painting. According to Ronan:

There is always a difference between a painting and an image of a painting. And now [using GANs] there is an image of a painting that does not exist.

Sometimes I dream about the painting I want to do, and when I have done it, it is completely different. This indicates the direction, but you have to make your own way. And that is why the paintings will be presented as one by Ronan and then one by Robbie. Because then they become a mirror. And the question is, who is mirroring who? Originally they were skulls, but they become real vanities because of this idea of the mirror. With traditional vanities there is always a skull in the mirror which gives you the idea of time passing. Originally when I showed my skulls, each one was a painting on its own. But when paired with the works by Robbie, it creates a kind of double.

Simon Renard de St. André, Vanitas. Unknown.

Interestingly, Robbie agrees with Ronan that the individual images being produced by the GAN are just images of paintings (and in some cases, images of paintings that do not exist). But Robbie adds that he sees the trained GAN itself as the artwork. According to Robbie:

Ronan is right when he says that the AI skulls are "images of artwork" instead of artworks themselves. In my opinion, the actual artwork is the trained GAN itself, and the outputs are really just fragments or little glimpses of that (the trained GAN is almost just a compressed version of all the possible AI skulls).

Robbie often compares his process of working with GANs to that of the artist Sol LeWitt, who is famous for writing out “rule cards” - algorithms for humans to execute to create his drawings.

Sol LeWitt Rule Card

Robbie explains:

The Sol LeWitt metaphor applies in multiple ways in GAN art. The data set is like the rule card, with rules created through curation - and the network interprets these to make art. But additionally, the network itself is also like the rule card, and the individual generations are just different interpretations/executions of those rules. This is in line with the idea that the individual works are just "tokens" of something larger - they're shadows of the network, the actual artwork.

At the same time, if the network itself is the piece of art, it's a very strange one, since it cannot be viewed or comprehended entirely (unlike the set of rules responsible for traditional generative artworks). We can only get small glimpses of it at a time. I'm not aware of any other type of art where this is true.

I have a lot of admiration for Ronan and his work - it seems almost unfair to Ronan to compare his work to the "images of artwork" output by the network. There's something present in the process of a traditional painter that I feel I'm missing as an artist - I'm not sure if it's dedication, rigor, the use of simple tools and not some complex machine, or something else entirely. Without being overly dramatic, there is something very honorable about how a very traditional painter operates; especially today when everything else is surrounded by technology. In short, I think that if I had to choose between the two types of skulls regardless of process or context, I would choose Ronan's skulls as my favorite. At the same time the Epoch Two AI skulls raise so many questions that I'm interested in - so including process/context, I'm more interested in them.

I’m an artist, I make work. But I am not the best at art history, I don’t have any traditional training, I don’t know how to paint or sketch or anything like that. I definitely do sympathize with Ronan’s view of digital work. Maybe he has seen a lot of low-quality digital work or he just doesn’t like the medium. It makes me wish that I was better at non-digital art.

I asked Ronan if he sees Robbie’s work as art or as inspiration for art.

Robbie introduced his own decisions and desires and changed the training images and the algorithms to make the work closer or further from the work I have done so far. It’s always interesting to bring something from outside the box into the realm of art. In the beginning, that can be seen as a threat. But in the end, it helps whatever is going on. If there is choice, if you can dream a little, it’s art. The skulls lend themselves well to AI and art because of the idea of the vanity of death. They therefore remain in ambiguity. And it is a disturbing ambiguity, the uncanny. Some will say it is about death and some will say it is about whatever, but I like maintaining this ambiguity in art. In the beginning I was worried that it was not possible to be free with AI. You can never say “that is not art, it is only a tool.” You have to find how to be free every time.

I asked Robbie how he finds this “freedom” in GANs and what makes good GAN art. He shared:

I really don’t like work that relies too heavily on the medium, like a watercolor painting where the whole interesting thing about the painting is that it is a watercolor and it relies on watercolor effects. My mom always called those “medium turds” or “watercolor turds.” I think the same applies to GANs where if it is reliant on the medium and the medium is the cool thing, then that’s not really art - it’s more like a tech demo. I think that the people that are making really cool work with GANs are using it in ways that are not obvious.

For example, in the show we have a box with a peephole in it, and when you look in, it will generate a skull, display it for like five seconds, and then add its input vector to the “do not use” list. So basically you are going to be the only person to ever see that skull… ever. I think that is cool because it’s different and it’s new and it’s not too reliant on the GAN just being a GAN.
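The once-only logic Robbie describes could be as simple as the following sketch, which assumes a trained generator like the one outlined earlier; the details of the actual installation are not published:

```python
import hashlib
import torch

retired = set()   # the "do not use" list: fingerprints of vectors already shown

def peephole_skull(generator, latent_dim=100):
    """Sample a latent vector, return its skull once, and retire the
    vector forever so that exact image can never be generated again."""
    while True:
        z = torch.randn(1, latent_dim)
        key = hashlib.sha256(z.numpy().tobytes()).hexdigest()
        if key not in retired:
            retired.add(key)
            return generator(z)
```

A continuous latent vector would essentially never repeat by chance anyway; the retired list is what turns that probability into a promise.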

You Can’t Hand Someone an Apple and Call Yourself a Chef.

Not only is the artwork from Infinite Skulls of higher quality than anything I have seen from AI so far, but the confrontation between the two artists and the work forged through their conflict make the perfect visual symbol for the clash between AI and the traditional art world at large.

I rarely anthropomorphize artificial intelligence and machine learning and prefer to think of these new technologies as augmenting human capabilities rather than replacing humans. But others have pushed me, asking, “Who is augmenting who?” in the relationship between AI and artists. If the relationship between AI and humans is symbiotic, then who is the host and who is the parasite? Though it may sound harsh, I think it is natural that people should ask a similar question about the relationship between Ronan and Robbie, even if there is no clear answer.

While the two artists get along and respect each other’s methods in the end, each has to see the other as fuel, a raw material to consume for his own artistic self-preservation. Each actively consumes the other’s work as an ingredient in his own, which is a different relationship than mere inspiration.

Ronan frames Robbie’s work as “photos of paintings that do not exist yet,” ostensibly because he himself has yet to create them, emphasizing that he is not happy with any of the works Robbie’s GAN produces until he “corrects” them. Note that Ronan also called Robbie and his AI “a guest in the studio” several times during our interview, which suggests a more passive role than that of an equal in artistic collaboration. Ronan elaborates: “It was like having a new guest in a jazz club,” again casting Robbie as a guest, or a “muse,” and not as a member of the band on the stage.

Similarly, Robbie has to treat Ronan’s 500 skull paintings like unrefined wheat, grinding them down and refining them to sufficiently anonymize them. He wrote a program to randomize Ronan’s paintings by stretching and flipping them, generating a less recognizable set of 17K training images from the initial 500 works, before he could create art sufficiently different from Ronan’s to call his own. Both must make a sacrifice of the other to produce their own work.

Ronan is rightfully proud to have painted two thousand skulls in the last two decades, but Robbie and his GAN can produce billions of skulls seemingly overnight, transforming Ronan into a sympathetic, man vs. machine, John Henry-like character.

It’s tempting to cast the story as two artists who overcome their many differences (age, language, tools) and some initial friction to collaborate on works that are as much by one as by the other. But to ignore the dynamic tension between the two artists is to miss much of what is interesting in the work. It is fitting that they landed on the theme of the skulls as vanities (traditional artworks designed to remind us of our own mortality) as it serves as an excellent thematic umbrella. After all, we all eventually return to the soil, only to become the ingredients in someone else’s narrative.


2019 Art Market Predictions

January 27, 2019 Jason Bailey
President Barack Obama, Kehinde Wiley, 2018

Feel free to continue reading our 2019 predictions, but please note that we have also recently published our 2020 art market predictions.

I’m a little late on my art market predictions this year, but I had too much fun with my 2018 art market predictions to keep my crystal ball in the closet. This year, I go deep on two trends that I think will dramatically transform the art market, not only in 2019, but for the next decade: first, increased diversity/inclusion in the art world, and second, the digital transformation of art.

I believe we are on a massive collision course between populations that are becoming increasingly diverse and an art history and art world that is still very white and very male.

Many have told me that nothing changes fast in the conservative art world. However, I am predicting nothing short of a “Moore’s Law” of diversity in art. I believe 2019 will bring double the protests and market shifts towards equality that we saw in 2018, and this doubling will continue annually until we reach visible signs of parity. I theorize that continued pressure on art museums will drive rapid cultural change, which will then trickle down and transform the art market.
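To see how aggressive that prediction is, the arithmetic is worth a moment. Starting from the roughly 3% share that work by women holds in major permanent collections (a figure cited below), annual doubling reaches parity in about five years:

```python
share, year = 0.03, 2018   # women's share of major permanent collections (low end)
while share < 0.5:         # parity
    share, year = share * 2, year + 1
    print(year, f"{share:.0%}")
# 2019 6%, 2020 12%, 2021 24%, 2022 48%, 2023 96%
```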

Equally radical, I believe rapidly evolving technology, specifically digitization, is shaping human lives faster and more dramatically than any other series of events in history. I predict that digital transformation of the art world will lead to the beginnings of the dematerialization of art (as is already happening with books and music). And I argue that rather than a rise in the commoditization of art, we are actually seeing the early beginnings of a move away from ownership by traditional definitions.

I predict that museums, galleries, and auction houses will realize improving diversity/inclusion and focusing on the rapidly shifting intersection of art + tech is the key formula for increasing interest, engagement, and participation in the arts.

The rest of this post dives into why I hold these two beliefs. I try to take a first-principles look at art and its function in society, including its use in museums and private collections. I then take a look at what I believe are two important macro trends — a strong push for diversity and inclusion, and the digital revolution — and make predictions around the impact of those trends on art, its value, and its function in society.

Museums Are Serving an Increasingly Diverse Population

Source: William H. Frey analysis of the U.S. Census population projections released March 13, 2018, and revised September 6, 2018

According to the Brookings Institution, the U.S. population is projected to become “minority white” by 2045. Additionally, Europe’s population as a percentage of the global population has been shrinking, moving from 28% in 1913 to 12% today, and is predicted to be just 7% by 2050.

As minority groups increasingly move towards forming a collective majority in the U.S. and Europe, it becomes increasingly important for museums to evaluate their collections and hiring policies to make sure they reflect the public they serve. This includes not only racial diversity, but also working to correct longstanding gender inequalities. There are many signs that there is still a lot of work to be done on both fronts and increasing pressure to get it done faster.

  • 85% of artists in major U.S. museums are white

  • Work by women artists makes up only 3–5% of major permanent collections in the U.S. and Europe

  • Less than 3% of museum acquisitions over the past decade have been of work by African American artists

  • Among museum curators, conservators, educators, and leaders, 84% are white, 6% are Asian, 4% are African American, 3% are Latina/o, and 3% have a mixed-race background

  • 46% of U.S. museum boards are all white

  • 93% of U.S. museum directors are white

  • The top three museums in the world — the British Museum (est. 1753), the Louvre (est. 1793), and The Metropolitan Museum of Art (est. 1870) — have never had female directors

Sadly, but not surprisingly, the art market reflects these same biases.

  • 80% of the artists in NYC’s top galleries are white (and nearly 20% are Yale grads)

  • 77.6% of artists in the U.S. making a living from their work are white

  • Only five women made the list of the top 100 artists by cumulative auction value between 2011-2016

  • The discount for women’s art at auction is 47.6%; even removing the handful of “superstar” artists that skew the data, the discount is still significant at 28%

  • There are no women in the top 0.03% of the auction market, where 41% of the profit is concentrated

  • Overall, 96.1% of artworks sold at auction are by male artists

Despite the art world being disproportionately white, U.S. museums and galleries saw increased engagement across all minority groups between 2012 and 2017.

Source: National Endowment for the Arts, The 2017 Survey of Public Participation in the Arts

Rather than backing away out of frustration, people who feel museums are not representing the public they serve are increasingly taking the fight into the museums. Here are just a few of the protests that were held in museums in 2018 alone:

  • Protests over the Brooklyn Museum hiring a white woman as chief curator for its African collection

  • Artist Michelle Hartney put up alternate wall labels at the Met highlighting Picasso’s and Gauguin’s poor treatment of women

  • Protests of the Met changing its admission policy, criticized as classist and nativist

  • Demonstrators filling the Whitney to protest its vice chairman’s ties to a tear gas manufacturer

  • Artists in a protest art show asked to have their work removed from the exhibition when the museum rented out its atrium to a defense contractor

  • Protests at the British Museum over an exhibit sponsored by BP

  • Digital artists held a guerrilla AR (augmented reality) exhibit at MoMA making a statement against elitism and exclusivity

  • Photographer Nan Goldin led the charge to shame the Sackler family for its role in getting people hooked on OxyContin by staging protests in the Sackler wings of several museums, leaving pill bottles and staging “die-ins”

Nan Goldin and P.A.I.N. (Prescription Addiction Intervention Now) protesting the Sackler involvement with the Harvard Art Museums

As if it’s not bad enough that the majority of work in art museums is by white males, much of the work that is not by white males was stolen during colonization. A recent report estimates that 90% of African art is outside the continent.

Between the 1870s and early 1900s, Africa faced European colonization and aggression through military force which included mass looting of African art and cultural artifacts. This art was brought back for display in museums in European countries, as well as in the U.S. There has been increased pressure to return the stolen art back to Africa, and in 2018, we saw several protests on this front. The group Decolonize This Place took the protest to the Brooklyn Museum with signs that read, “How was this acquired? By whom? For whom? At whose cost?” and protestors at RISD demanded a sculpture looted from the Kingdom of Benin be returned.

Decolonize This Place activists protesting in the Brooklyn Museum

French President Emmanuel Macron set a new precedent when he commissioned research on how to handle France’s ~90,000 artworks from Africa. The result was a 109-page report recommending that France give back to Africa all works in their collections that were taken “without consent” from former African colonies.

France, of course, was not alone in colonization. Hundreds of thousands of African artifacts are housed in the U.K., Germany, Belgium, and Austria. The British Museum alone has over 200,000 items in its African collection. I predict pressure to return these artifacts (in the cases where they were ill-gotten) will only increase. I don’t expect people will settle for the “long-term loans” of works back to Africa that many museums are proposing in lieu of complete repatriation.

When Museums Signal Inclusion and Diversity, Good Things Happen

Museums have a lot of work to do to increase diversity and inclusion, but good things happen when they do, even when it is just symbolic.

In 2018, Beyonce and Jay-Z shot a video for their track Apeshit in the Louvre. Before you write this off as insignificant, you should know that it had an immediate and enormous impact, with Louvre officials crediting the video for helping increase attendance by 25% over 2017 to an all-time record of ten million visitors in 2018.

No doubt Beyonce and Jay-Z resonate strongly with a young and diverse audience (with over 100 million albums sold combined), and their video likely brought some fresh faces to the Louvre.

Similarly, many of the most-heralded art exhibitions of 2018 featured female artists, suggesting a strong appetite for diversity in our museums and galleries. These include:

  • Hilma af Klint - Guggenheim

  • Tacita Dean - National Gallery, National Portrait Gallery, and Royal Academy

  • Adrian Piper - MoMA

  • Berthe Morisot - Barnes Foundation

  • Anni Albers - Tate Modern

  • Vija Celmins - SFMOMA

  • Tomma Abts - Serpentine Sackler Gallery

Museums that want to see growth in attendance should follow the example set by the Louvre and others by finding public ways to signal that they are open to artists and visitors of all races and genders, even if they still have work to do in diversifying their collections and staff. Showing some self-awareness can go a long way while working to solve the problems long term.

Continued Pressure on Art Museums Will Drive Rapid Cultural Change That Will Transform the Art Market

The relationship between museums (as culture drivers and tastemakers) and galleries and collectors is highly interdependent. We know from studies that artists see major boosts in the market for their work when they are shown in major museum exhibitions.

“Auctions of valuable pieces tend to coincide with successful exhibitions.” Ahmed Hosny, Machine Learning For Art Valuation: An Interview with Ahmed Hosny

Given this dependency, I believe that once museums accelerate the diversification of the work they show (under pressure from an increasing number of protests), we will see the value of the art rise dramatically in the market.

We are already seeing some early signs of the market correcting for its indefensible biases. In 2018, Kerry James Marshall broke the record for the top-selling work by a living African American artist when his piece Past Times sold for $21.1M at Sotheby’s in May.

Past Times, Kerry James Marshall, signed and dated '97

Likewise, Jenny Saville set the record for the most expensive work sold by a living female artist when her 1992 painting Propped sold for $12.4M.

Propped, Jenny Saville, 1992

I believe these two records falling in the same year is just a very small signal of a massive market correction that will happen over the next two decades as we mature as a society and learn to see people as equals, regardless of race or gender. Those who move quickly to increase diversity will flourish, and those who don’t will risk losing their audience and becoming irrelevant.

Digital Transformation and the Dematerialization of Art

Phantom 5, 2018, Jeff Bartell

"Art is an experience, not an object." - Robert Motherwell

The second major force that I believe will shape the art world in 2019 (and the decade to come) is a strong trend towards embracing art + technology, specifically the digital transformation of art.

Source: International Telecommunication Union

We are living during arguably the most dramatic technological transformation in human history, and with half the world online now, I believe the future of art is inevitably digital. With music, we saw physical media like cassettes and CDs give way to dedicated hardware like iPods and MP3 players, and finally to streaming services like Spotify and Pandora.

Source: IFPI Global Music Report 2018

We saw the same trend in publishing, with physical books losing market share to e-books and e-readers. Those devices were just an intermediary step to streaming audiobooks, which is now the fastest-growing sector of publishing by far.

Source: APA (Audio Publishers Association)

Despite rapid shifts towards digitalization in other fields, most of us still think of canvas on a wall when we hear the word “art.” This is ironic given that Americans spend an average of 11 hours a day looking at screens and almost no time looking at the walls of their homes.

Source: TEFAF Art Market Report Online Focus 2017

Galleries are struggling or closing down precisely at a time when interest in art is rising on Instagram and at international art fairs. But increased interest does not always mean increased sales. Writer Tim Schneider captured this shift in his review of Art Basel Miami last year when he asked:

…if the fastest, and perhaps only, organically growing audience for art is more interested in being around it for a week, a few days, or even a night at a time rather than in owning it for a high price for much longer, what does that mean for everyone else?

I think we are seeing some early signs that art consumption is shifting away from physical ownership, as we saw with books and music, and toward the experiential, ushered in by the digital.

For centuries, physical ownership of art was required to enjoy it. Art was a sign of wealth and power, and collecting art was about saying “I own this art.”

LUIGI FIAMMINGO – Portrait of patron Lorenzo de’ Medici, called The Magnificent, c. 1550

With the increase in availability of the internet, we have seen a rise in social media consumption of art. Sharing selfies at museums and art fairs on social media signals your taste and sophistication without having to own physical artworks. And while a few dozen people may see the art you purchased at a gallery and hung on the walls of your home, hundreds to thousands of people instantly see the art selfies you share on your social profile. This has enabled art appreciation to be less about saying “I own this art” and more about saying “I like this art.”

Me at the Boston ICA hoping some of Albert Oehlen’s “coolness” will transfer to me in this selfie posted on Instagram

I believe as we become increasingly digital, the new message we send going forward will be “I support this artist.” As with the previous stages of “owning” and “liking,” “supporting” publicly links you back to the art and artists who you enjoy in a highly visible way. And having methods for supporting artists that do not require you to purchase or commission whole works of art greatly expands the pool of potential participants.

Few of us show off our CD collections these days; instead, we consume music through streaming and go to concerts where we take selfies and buy t-shirts that we share on social media as patronage proof points. I expect art to move in that direction, and would argue that it already has.

Physical possession of works that are created digitally provides no real advantage. Again, it is the same dynamic of dematerialization we are seeing in music and books. I gain very little by having a physical CD for every album I have access to on Spotify or a physical book for every story I have access to on Audible. Neither would be practical. With streaming, what I have lost in fetishizing tangible objects, I have gained with access to a number of albums and books nobody could have dreamed of 25 years ago.

Does an art streaming service in the mode of Audible or Spotify sound ludicrous? Well, generation one of art streaming has been around for almost a decade and has over one billion users.

Source: Instagram

I’m talking about Instagram, of course. But Instagram is really just the Napster of art streaming, as it falls short of supporting most artists. Nevertheless, it is a solid proof point of our insatiable appetite for the digital consumption of art. I predict we will soon see a combination of the proven distribution and consumption model of Instagram paired with patronage models like Patreon and Kickstarter.

Last year, we saw a lot of experimentation around new models for funding artists from several promising startups exploring blockchain. Dada.nyc, where I am an advisor, has over 160K registered artists in their community. Over 100K drawings have been produced on their social media platform, where artists communicate with each other through drawings.

I started this drawing when I was in bed with Lyme disease in my knee. Artists from around the world responded. The conversation continues months later.

The Dada team is carefully working through how to create a market that does not just duplicate the current physical art market. They want to avoid building a system where only a few can afford to collect and an even smaller number of people are rewarded for their creative work. Dada dislikes the collecting of art as speculation and is constantly evaluating new models of patronage that can enable artists to focus on creating their work. Their goal is for the entire community to benefit each time patrons provide monetary support and to blur the lines between “patrons” and “artists,” as they believe creating art is beneficial for everyone.

Another blockchain art market that experienced significant traction and growth in 2018 is SuperRare. They provide a new revenue stream for artists (which helps fund creative projects) while giving patrons the ability to discover, buy, sell, and collect unique digital creations by artists from around the world.

Screenshot of my digital art collection in SuperRare

SuperRare completed almost 6,000 transactions, generating 602.76 ETH to date (over $70K), less than a year after their launch.

https://www.dapp.com/dapp/SuperRare

Sure, these are not Instagram numbers just yet, but having a handful of startups (other notables include Portion.io, KnownOrigin, R.A.R.E. Art Labs, and Digital Objects) prove out the model is an important first step in building out any new market. It is also telling that despite the cryptocurrency crash, which devastated the majority of blockchain companies, all of these blockchain art markets are still in business and experiencing growth.

There are two important things to note about digital art markets like the ones above:

  • You don’t need to own the work to experience it. When I buy a work on SuperRare, I see the same image that everyone else can see for free.

  • Because of this, the joy in collecting digital art does not derive from denying other people access to art, but instead, in increasing access to art and artists you enjoy and want others to appreciate, as well.

Digital art is highly replicable and transmissible, so there is no benefit to keeping it to yourself. In fact, the value of the work (as with all art) only goes up as you share it more broadly. The message with collecting in a digital age is less and less “I want you to know how powerful I am - I own this thing that nobody else can own” and is instead “I want you to know I support this artist because their work is awesome, and I’m excited to share it with as many people as possible.”

So why buy digital art if everyone else can see the same image for free? It’s simple: Because you can’t expect artists to continue creating if you don’t support them. I believe it’s not the art itself that we should revere, but the people making it. Too often we celebrate and fetishize individual works of art long after the geniuses that created them have died penniless. Rather than cater to speculative or extrinsic values, I predict we will see several new digital art streaming services built on the intrinsic pleasure we derive from art. We’ve learned we don’t need to own a physical, re-sellable book in order to enjoy a great novel, or a piece of vinyl to appreciate music. The same will be true for art.

It is important to remember that ownership and speculation on the part of collectors is not a necessary ingredient to producing great art. That is just one way of making sure artists have enough money to survive and continue working, and there is a good chance it is not the most effective (nor the best) model for artists.

Of course, many artists will choose to continue to work in traditional physical media like painting and sculpture. We will always have amazing museums full of physical artwork, and I couldn’t be more thankful for that. There will always be galleries for buying and selling physical art, just as we still have brick-and-mortar bookstores and music stores. But I think the digital transformation of art is inevitable and coming faster than most people expect. I also strongly believe that this shift is healthy and presents an opportunity to reframe (pun intended) how we treat artists and consume art.

Summary and Conclusion

Lots of people I know don’t like making or reading predictions. The primary complaint I hear is that predictions are either “boring and accurate” or “entertaining but outrageous.” I, on the other hand, love making predictions. I feel like the process is similar to making art in that I can use both my imagination and my powers of observation and reasoning to show the world how I see things as they could be.

As a pseudo-futurist and techno-optimist, I relish the idea that we can build a world where an increasing number of people can participate in the joys that art has given me in my life. I believe the macro forces and trends that I am seeing in the world support that idea.

But I don’t want to let myself off the hook too easily, so what am I actually saying that is measurable in terms of predictions?

  • First, a “Moore’s Law” of diversity in art. Inclusion and diversity in art will double annually until we reach parity, as measured by:

    • An increase in the price of works sold by women and minorities at auction;

    • An increase in the number of women and minorities in positions of power at museums and in the art trade;

  • Second, an increase in the number of people interested in art without a corresponding increase in the number of collectors;

  • Third, the launch of at least one art streaming service in 2019 and a shift towards this model over the next decade;

  • And fourth, diversity and tech becoming the key topics for artists, art journalism, and art fairs throughout 2019.

I hope you enjoyed this year’s predictions! Whether you agree or disagree, I am always excited to hear from Artnome readers. Leave your thoughts in the comments below or hit me up on Twitter at @artnome or e-mail me at jason@artnome.com.


How Rembrandt and Van Gogh Mastered The Art of the Selfie

January 13, 2019 Jason Bailey
An average of every self-portrait painted by Rembrandt
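A composite like the one above is typically made by resizing the portraits to a common canvas and averaging them pixel by pixel. Here is a minimal sketch with NumPy and Pillow, assuming a folder of scans; I don’t know exactly how this particular image was produced:

```python
from pathlib import Path

import numpy as np
from PIL import Image

size = (256, 256)   # crude alignment: resize everything to one canvas
stack = [np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float64)
         for p in sorted(Path("rembrandt_self_portraits").glob("*.jpg"))]
average = np.mean(stack, axis=0).astype(np.uint8)   # per-pixel mean
Image.fromarray(average).save("average_rembrandt.png")
```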

I recently read that the average millennial will take an astronomical 25,000 selfies in their lifetime — almost one per day. This got me thinking about the history of selfies. Before the invention of the camera, artists were the only ones capable of making selfies (I know, what a tragedy, right?). So in a weird way, you could argue Rembrandt — known for painting an enormous number of self-portraits — was the Paris Hilton of his day. Sound crazy? It’s not.

Self-portrait in a hat with white feathers, Rembrandt, 1635

Portrait in a hat with pink feathers, Paris Hilton, 2018

Though the number is somewhat contentious, Rembrandt was known to have created close to 100 self-portraits (over 40 of them as paintings). That may not sound like much by today’s selfie standards, but it’s huge when compared to the painters of his day — especially when you consider that it accounts for 10% of his total artistic output. Think about it: he is a painter by trade, and 10% of his time on the job was spent making paintings of himself. If that’s not some Paris Hilton-level selfie action, then I don’t know what is.

The Old Masters Loved Showing Their Bling

Self-Portrait with a Sunflower, Anthony van Dyck, c. 1633 (note the flexing of the bling he received from his patron, the English monarch Charles I)

Maybe you are thinking, “Jason, Rembrandt is the greatest painter of all time. Surely he was motivated by something more noble than the vanity that motivates today’s selfie-snapping celebrities.” Well, actually, not so much. Consider this: old master portrait artists were often given gold necklaces by their wealthy patrons. This became such a big deal that the painter Titian decided to bling out his selfies by including the gold chains he received from his patron, the Emperor Charles V. It kicked off a fad among portrait artists including Van Dyck, Vasari, Rubens, Bandinelli, and others.

Self-Portrait, Titian, 1546 (wearing the golden chain that was given to him by the Emperor Charles V in 1533)

Self-Portrait, Baccio Bandinelli, 1530 (wearing a gold chain with a pendant bearing the symbol of the chivalric Order of St. James)

Not unlike the chains worn by today’s hip hop artists, these were a status symbol — they showed that an artist had “arrived” and was at the top of their game. Unfortunately, Rembrandt had no wealthy patrons when he was first starting out. Undeterred, he decided to “fake it till he made it” and painted imaginary gold chains into his self-portraits to suggest that he had a higher status and more power than he actually did. Talk about doctoring a selfie for purposes of vanity and status.

Self-Portrait with Beret, Gold Chain, and Medal, Rembrandt, 1640

Jay-Z: Portrait with Ball Cap, Gold Chains, and Brooch

The Dutch Love Their Selfies

Selfie with Van Gogh’s 1888 Self-Portrait Dedicated to Paul Gauguin (on a research visit to Harvard Art Museum)

“They say—and I am willing to believe it—that it is difficult to know yourself—but it isn’t easy to paint yourself, either.” - Vincent van Gogh

In terms of selfies, Van Gogh was not far behind Rembrandt, having painted 35 self-portraits in just one short decade of activity. That is roughly 4.2% of his total output and more than three self-portraits a year!

Van Gogh believed that painting could be reinvented through portraiture and fantasized about building a colony of artists working together. He also knew that Japanese woodblock printers often exchanged prints with one another, and he encouraged his besties Gauguin and Bernard to exchange self-portraits with him.

As Van Gogh wrote:

It clearly proves that they [Japanese woodblock printers] liked one another and stuck together, and that there was a certain harmony among them [. . .] The more we resemble them in that respect, the better it will be for us.

Van Gogh had essentially come up with an old-school social network where people could share and comment on each other’s selfies, not unlike Instagram or Snapchat. He wrote to his brother Theo sharing his thoughts on the self-portraits he received from Gauguin and Emile Bernard. Here is what Van Gogh’s comments would have looked like on Instagram (using actual quotes from correspondence between the artists).

Mockup: Van Gogh’s comments on his friends’ self-portraits, recreated as Instagram posts

Sadly, Van Gogh and Gauguin’s friendship famously soured, and Gauguin sold the portrait Van Gogh painted for him for about three hundred francs after making a few restorations.

Van Gogh’s portraits function like a visual diary. While his early works do not feature gold chains, they are painted in a dark Rembrandt-esque palette and feature conservative clothing and a pipe, suggesting Van Gogh may still have been at least a little preoccupied with keeping up appearances.

Self-Portrait with Dark Felt Hat, Vincent Van Gogh, Paris, 1886

Self-Portrait with Pipe, Vincent Van Gogh, Paris, 1886

Self-Portrait, Vincent Van Gogh, Paris, 1886

By 1887 (just one year later), we see Van Gogh rapidly exploring self-portraits in the style of other artists, including influences from Impressionism, Pointillism, and Japanese woodblock prints. I believe we are also seeing a shift from portraits focused on external appearance towards portraits capturing his own psychological inner life.

Self-Portrait, Vincent Van Gogh, Paris, 1887

Self-Portrait with Bandaged Ear, Vincent van Gogh, Arles, 1889

Self-Portrait, Vincent van Gogh, Saint-Rémy, 1889

Averaging Rembrandt and Van Gogh Self-Portraits

It dawned on us that, with so many selfies between them, Rembrandt’s and Van Gogh’s self-portraits make for a pretty interesting data set. After brainstorming with Artnome data scientist Kyle Waters, we decided it would be cool to create “average” self-portraits of each artist by combining their paintings into a single image. Kyle settled on an approach similar to the technique he employed in averaging Van Gogh’s paintings to show why Van Gogh changed his color palette.

We started by importing all the self-portrait images for Rembrandt and Van Gogh from the Artnome database and resized them all to the same 400 x 400 pixel dimensions. I know, kind of a sin to change the aspect ratios of famous paintings, but this made it easier for us to "add" each image together, i.e., taking the red value of the top left pixel in Self-Portrait with Bandaged Ear and adding it to the red value of the top left pixel of Self-Portrait Dedicated to Paul Gauguin, and so on.

We then calculated the simple arithmetic average by dividing the summed pixel values by the total number of paintings.
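For readers who want to try this at home, here is a minimal sketch of that averaging step in Python with NumPy and Pillow. Kyle’s actual pipeline isn’t published here, so the folder layout below is a hypothetical stand-in; the per-pixel mean is the same idea.

```python
import numpy as np
from PIL import Image
from pathlib import Path

# Hypothetical folder of self-portrait scans; swap in your own image set.
paths = sorted(Path("rembrandt_self_portraits").glob("*.jpg"))

# Resize everything to a common 400 x 400 grid so the pixels line up,
# then take the per-pixel arithmetic mean across all paintings.
stack = np.stack([
    np.asarray(Image.open(p).convert("RGB").resize((400, 400)), dtype=np.float64)
    for p in paths
])
average = stack.mean(axis=0).astype(np.uint8)
Image.fromarray(average).save("average_rembrandt.png")
```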


We were pretty psyched with the results. You can check them out below.

The average Rembrandt self-portrait

The average Van Gogh self-portrait

While there is a lack of detail, I actually love the results. You can definitely make out which is Rembrandt and which is Van Gogh. Rembrandt’s composite features an earthy brown color palette, while Van Gogh’s yellows and blues average a greenish hue, with patches of orangey-red where his hair and beard were most commonly depicted. It is also clear from the composites that Rembrandt preferred to paint himself looking to our right, whereas Van Gogh most often looks to our left.

We thought the effect was pretty cool, so we tried it on a few other thematic subcategories, including Van Gogh’s portraits of Madame Ginoux and his sunflower paintings, creating an average image for each.

Visual average of every portrait of Madame Ginoux by Van Gogh

Visual average of the sunflower paintings by Van Gogh

As with the self-portraits, these were visually interesting, as you can still make out some of the features and shapes without one clear painting dominating.

Conclusion

Next time someone gives you a hard time for spending 15 minutes fussing with filters on your selfie, remind them that Rembrandt spent a full 10% of his career perfecting selfies. Who knows? Maybe there is even a market for an “old masters” app that lets people add gold chains to their selfies.

As always, thanks for reading. As we mentioned, this was inspired by Kyle Waters’ excellent work in averaging Van Gogh’s paintings for our post about the shifts in his color palette. We have a third post in the series that will focus on establishing “the average” Van Gogh work using several different techniques. Sign up for our newsletter and we’ll be sure to alert you when it goes live.

If you have questions or suggestions you can always reach me on Twitter at @artnome or email me at jason@artnome.com.


Painted Portraits Inspired By Neural Net Trained on Artist’s Facebook Photos

January 9, 2019 Jason Bailey
Crazy Eyes, Liam Ellul, 2018

I’ve come to learn that if a person can run neural networks and has a deep interest in art, they are probably a pretty creative and interesting person. Australian artist Liam Ellul is no exception.

Ellul recently shared on Twitter a new portrait series he is working on called Just Tell Me Who To Be. His portraits are simultaneously of nobody in particular and yet also everybody in his life. The series explores identity through four 12”x12” acrylic-on-cotton paintings. Each painting was painted directly on printouts of images Ellul created by training a GAN (generative adversarial network) on 10,000 photographs from his Facebook account.

Without going into a full description of how GANs work (you can find that here), the process involves a neural network inventing new images based on a set of training images provided by the artist. So in Ellul’s case, he is essentially asking the GAN, “If I give you photos of all the important people and moments in my life, can you go and invent me some new people and moments?” After training, the GAN can generate a huge number of potential images by sampling points from its “latent space,” which you can see explored in the animated GIF below.
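For the technically curious, sampling and interpolating in latent space looks something like the sketch below. Ellul’s actual architecture and code aren’t shown here, so the generator is a placeholder stand-in; the point is simply that every latent vector decodes to a new invented image, and walking between two vectors produces the morphing faces in the GIF.

```python
import torch
import torch.nn as nn

# Stand-in generator: in practice this would be a trained DCGAN-style
# network; this placeholder exists only so the sketch runs end to end.
G = nn.Sequential(nn.Linear(100, 64 * 64 * 3), nn.Tanh())

z0, z1 = torch.randn(1, 100), torch.randn(1, 100)  # two random latent codes
frames = []
for t in torch.linspace(0, 1, steps=30):
    z = torch.lerp(z0, z1, t.item())      # walk a straight line between the codes
    frames.append(G(z).view(3, 64, 64))   # each step decodes to a new "face"
```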

Small snippet from the interpolation video of Ellul’s GAN, trained on his personal Facebook and Google photos.

Ellul then shortlisted a dozen of the faces seen in the GIF above, printed them out, and laid them around his apartment for a few days. According to Ellul:

After extracting all the faces from my archives — the data preparation was somewhat manual — I found myself looking at thumbnails most of the time. Pre-processing in this context reminded me of mixing colors on a palette, but instead of colors, I was mixing forms. It became pretty clear which ones had the strongest hold on me — then I painted them. The common thread was that the outputs I chose gave me an impression of something I identified with in a really deep way. Like, out of the latent space, it touched on something that I couldn’t have represented unless I saw it first.

Like Dropping Your Family Photo Album Into a Blender

Self Portrait (now alive), Liam Ellul, 2018 - GIF alternating between the GAN printout and the finished painting

Self Portrait (now alive), Liam Ellul, 2018

Self Portrait (now alive), Liam Ellul, 2018 (reference image)

Though some AI artists make their own training image sets — notably, Anna Ridler, with her painstaking photographic collection of tulips, and Helena Sarin, who trains GANs on her drawings and paintings — it is rare. For practical reasons (scale and availability), most AI artists select large public data sets to train GANs. However, because these public data sets are widely available, as are the GANs used to process them, there are signs that the results are becoming increasingly homogenous.

Ellul bucks this trend by not only using his own materials, but by using the most personal materials possible: photographs from his own life’s relationships, experiences, and memories, which are no doubt loaded with personal meaning and associations. He owns the material, in the truest sense of the word, as he has quite literally lived it. From Ellul:

It was a surprising realization just how much data I have created over my life and how effectively it can be harnessed in the creative process. Some look like me physically, but the face and expression I would never pull in a photo — it’s this surreal look that captures a feeling and encourages me to express it. Others look like a blend of me and a friend with similar surreal expressions.

Once I was happy with the outputs of the model, I spent a long while just watching the waves of eerily familiar faces that it produced. Often, I’d recognize a face as my own or fused with a close friend – despite never being captured with that expression – certain frames would perfectly resonate with a part of me when I saw them.

Number 3, Liam Ellul, 2018

Number 3, Liam Ellul, 2018 (Source image from GAN)

Fascinated by Ellul’s use of GANs as a departure point or inspiration for creating physical paintings, I asked him about both his artistic and technical backgrounds.

Ellul shared that he has been creating portraits as a sort of visual journal since his grandfather first taught him to draw with charcoal when he was 10 years old (though he later switched to painting in acrylic). He initially went to school for law but realized “it wasn’t something I wanted to do professionally,” and he eventually shifted his focus to a rapidly growing interest in analytics. This led to Ellul and a friend launching “a small company focused on agricultural crop analysis and research.” It was there that Ellul learned about neural networks while testing predictive models for plant growth. Again from Ellul:

The first time I saw a GAN was in 2017, in Alec Radford’s GitHub repo where he showed the generation of bedrooms, faces, and album art. My brain broke. Then mid-last year I saw the incredible high-resolution faces you could get with GANs — something clicked in my brain and I felt compelled to do this portrait series.

Self Portrait (With My Friends), Liam Ellul, 2018

Self Portrait (With My Friends), Liam Ellul, 2018 (Source image from GAN)

Ellul now works in strategy and product development at Microsoft and creates his artwork on the side. I asked Ellul if he had any upcoming projects and, if so, what was next:

Yes! I love the adventurous nature of this area and the experience of running through a personal gauntlet to get these paintings out! In terms of what’s next, I have two ideas bubbling away that are very much still coming together. Network design and exploring ways networks can be linked together is something I will put more time into as I develop my approach. I am also going to see if I can make the switch from acrylics to oils!

Conclusion

While the purist in me loves seeing work created digitally staying digital, I suspect we will increasingly see artworks executed in a variety of media as GANs come into their own as a tool for augmenting creativity (imagine what a GAN-inspired sculpture might look like). I think this is an interesting direction, and I’m encouraged by the exploration and work of artist/technologists like Ellul and his recent portraits.

As always, feel free to reach out to me at jason@artnome.com with any questions or suggestions. You can also hit me up on Twitter, my social media of choice, at @artnome.


DeepDream Creator Unveils Very First Images After Three Years

January 2, 2019 Jason Bailey
Cats, one of the first DeepDream images produced by its inventor, Alex Mordvintsev

In May of 2015, Alex Mordvintsev’s algorithm for Google DeepDream was waaay ahead of its time. In fact, it was/is so radical that its “time” may still never come.

DeepDream produced a range of hallucinogenic imagery that would make Salvador Dali blush. And for a month or so, it infiltrated all of our social media channels, all of the major media outlets, and even became accessible to anyone who wanted to make their own DeepDream imagery via a variety of apps and APIs. With the click of a button, I turned a photo of my wife into a bizarre gremlin with architectural eyes and livestock elbows.

Image I made using a DeepDream app in August, just three months after it was invented by Alex Mordvintsev

And then — “poof” — DeepDream just kind of disappeared. It is the nature of art created with algorithms that when the algorithms are shared with the public, the effect quickly hits a saturation point and becomes kitsch.

I personally think DeepDream deserves a longer shelf life, as well as a lot of the credit for our current fascination with machine learning and art. So when Art Me Association, a non-profit organization based in Switzerland, recently asked if I wanted to interview Alex Mordvintsev, developer behind the DeepDream algorithm, I said “yes” without hesitation.

And when Alex shared that he recently found the very first images DeepDream had ever produced and then told me that he had never shared them with anyone, I could hardly contain myself. I immediately asked if I could share them via Artnome. Well, to be honest, I first asked if I could buy them for the Artnome digital art collection (collector’s instincts), but it turns out Google owns them and has let Mordvintsev share them through a Creative Commons (CC) license. Something tells me that Google probably doesn’t need my money.

Father Cat, May 26, 2015, by Alexander Mordvintsev

For me, Mordvintsev’s earliest images from May 2015 are as important as any other image in the history of computer graphics and digital art. I think they belong in a museum alongside Georg Nees’ Schotter and the Newell Teapot.

Custard Apple, May 16, 2015, by Alexander Mordvintsev

Why do I hold so much reverence for the early DeepDream works? DeepDream is a tipping point where machines assisted in creating images that abstracted reality in ways that humans would not have arrived at on their own. A new way of seeing. And what could be more reflective of today’s internet-driven culture than a near-endless supply of snapshots from everyday life with a bunch of cat and dog heads sprouting out of them?

I believe DeepDream and AI art in general are an aesthetic breakthrough in the tradition of Georges Seurat’s Pointillism. And to be fair, describing Mordvintsev’s earliest DeepDream images as “just a bunch of cat and dog heads emerging from photos” is about as reductive as calling A Sunday on La Grande Jatte “a bunch of dots.”

That Mordvintsev did not consider himself an artist at the time and saw these images as a byproduct of his research is not problematic for me. Seurat himself once shared: “Some say they see poetry in my paintings; I see only science.” Indeed, to fully appreciate Mordvintsev’s images, it is also best to understand the science.

I asked Mordvintsev about the origins of DeepDream:

The story behind how I invented DeepDream is true. I remember that night really well. I woke up from a nightmare and decided, at 2:00 AM, to try an experiment I had had in mind for quite a while. That experiment was to try and make a network add details to some real image, to do image super-resolution. It turns out it added some details, but not the ones I expected. I describe the process like this: neural networks are systems designed for classifying images. I’m trying to make it do things it is not designed for, like detect some traces of patterns that it is trained to recognize and then trying to amplify them to maximize the signal of the input image. It all started as research for me.
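In code, that amplification loop is surprisingly compact. Below is a bare-bones sketch of the gradient-ascent idea in PyTorch; Mordvintsev’s original ran on GoogLeNet in Caffe, and the layer choice, step size, and iteration count here are my assumptions rather than his settings.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A classifier pretrained on ImageNet; DeepDream repurposes it as a generator.
# (Newer torchvision versions use weights=... instead of pretrained=True.)
model = models.googlenet(pretrained=True).eval()

# Capture the activations of one intermediate layer (layer choice is an
# assumption; different layers "dream" very different patterns).
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(target=out)
)

img = T.ToTensor()(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):                      # gradient ascent on the input image
    model(img)
    loss = activations["target"].norm()  # "signal" of whatever the layer detects
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / img.grad.abs().mean()  # normalized ascent step
        img.grad.zero_()
        img.clamp_(0, 1)                 # keep the image displayable
```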

I asked Alex what it was like to see his algorithm spread so quickly to so many people. I thought he might have regretted it getting “used up” by so many others, but he was far less shallow than me in this respect and took a broad-minded view of his impact:

I should probably have been involved in talking about it at that moment, but I was more interested in going deeper with my research and wanted to gain a deeper understanding of how things were working. But I can’t say that after three years of research that I understand it. So maybe I was over-excited in research at the moment.

I think it is important that everyone can participate in it. The idea that Iana, my wife, tries to convey is that this process of developing artificial intelligence is quite important for all people, and everyone can participate in it. In science, it isn’t about finding the answer; it is more about asking the right question. And the right question can be brought up by anybody.

The way I impacted society [with DeepDream] is that a lot of people have told me that they got into machine learning and computer vision as a result of seeing DeepDream. Some people even sent me emails saying they decided to do their Ph.D.s based on DeepDream, and I felt very nice about that. Even the well-known artist Mario Klingemann mentioned that he was influenced by DeepDream in an interview.

Indeed, I reached out to artist Mario Klingemann to ask him the significance of DeepDream for him and other prominent AI artists. He had this to say:

The advent of DeepDream was an important moment for me. I still remember the image of this strange creature that was leaked on reddit before anyone even knew how it was made and knew that something very different was coming our way. When the DeepDream notebook and code were finally released by Google a few weeks later, it forced me to learn a lot of new things; most importantly, how to compile and set up Caffe (which was a very painful hurdle to climb over), and also to throw my prejudices against Python overboard.

After I had understood how DeepDream worked, I tried to find ways to break out of the PuppySlug territory. Training my own models was one of them. One model I trained on album covers which, among others, had a "skull" category. That one worked quite nicely with DeepDream since it had the tendency to turn any face into a deadhead. Another technique I found was "neural lobotomy," in which I selectively turned off the activations. This gave me some very interesting textures.

Where I had seen sharing the code to DeepDream as a mistake, as it quickly over-exposed the aesthetic, Mordvintsev saw a broad and positive impact on the world which would not have been possible without it being shared. Mordvintsev also took some issue with my implication that DeepDream was getting “old” or had been “used up.” It turns out that my opinion was more a reflection of my lack of technical abilities (beyond using the prepackaged apps) than a reflection of DeepDream’s limitations as a neural net. He politely corrected me, saying:

Maybe you played with this and assumed it got boring. But lately, I started with the same neural network, and I found a beautiful universe of patterns it can synthesize if you are more selective.

I was curious why so many of the images had dog faces. Alex explained to me that he was using a network pretrained on ImageNet, a standard benchmark for image classification that was established around 2010. ImageNet includes 120 categories of dog breeds to showcase “fine-grained classification.” Because ImageNet dedicates so much of its capacity to dog breeds, networks trained on it develop a strong bias toward seeing dogs. Alex points out that others have applied the same algorithm to MIT’s Places Image Database. Images from the MIT database tend to highlight architecture and landscapes rather than the dogs and birds favored by the ImageNet-trained network.

I asked Mordvintsev if he now considers himself an artist.

Yes, yes, yes, I do! Well, actually, my wife and I are considering ourselves a duo. Recently, she wanted to make a pattern for textiles that tiled well, and I sat down and wrote a program that tiled. And most generative art is static images on screen or videos, and we are trying to get a bit beyond that to something physical. We recently got a 2.5D printer that makes images layer by layer. I enjoy that a lot. But our artistic research lies mostly in this direction: moving away from prints into new mediums. Recently, we had our first exhibition with Art Me at Art Fair Zurich, and we had sponsorship from Google. We are interested in showing our art to the world and trying to explain it to a wide audience.

Alex and Iana Mordvintsev prepping to show their latest work at Art Fair Zurich

While I appreciated DeepDream from the beginning, I felt it became kitsch too quickly as a result of being shared so broadly. Speaking with Alex makes me second guess that. It’s now clear to me that Alex did the world a service by making his discovery so broadly available and that he still sees far more potential for the DeepDream neural net (and he would know). There are some critics who just don’t “get” AI art, but as Seurat said: “The inability of some critics to connect the dots doesn't make Pointillism pointless.”

Above: Alex Mordvintsev’s NIPS Creativity Art Submission

As always, thanks for reading! If you have questions, suggestions, or ideas, you can always reach me at jason@artnome.com. And if you haven’t already, I recommend you sign up for the Artnome newsletter to stay up to date with all the latest news.


How Artnome Stumbled Into Writing About The Three Biggest Art Stories of 2018

December 31, 2018 Jason Bailey

Reading all the “2018 art year in review” articles over the last few days has really helped hammer home for me that the three most talked-about stories in art in 2018 were:

  • Blockchain and art

  • The Banksy shredding

  • The AI art sold at Christie’s

While I don’t think of Artnome as a traditional art news site, we had some of the earliest stories on all three topics. We also ended up near the top of Google’s search results for all three stories. How did this happen?

Well, it was mostly dumb luck. But I always enjoy and appreciate it when blog authors and entrepreneurs candidly share the stories and the data from behind the curtain. So this post will blend our 2018 year in review with a backstage look at Artnome, warts and all.

Part One: Blockchain, Art, and “Being There”

Someone really needs to remake the movie Being There because it is a great cultural touchstone, but most people are too young to get the reference these days. In this rags-to-riches movie, a simple-minded, sheltered gardener named Chance, who knows only about gardening and what he’s learned from daytime TV, is forced to leave his home and enter the great big world after his employer passes away. Through a series of hilarious twists, Chance the gardener becomes “Chauncey Gardner” after he is mistaken for an upper-class gentleman. The story culminates with Chance giving the president of the United States basic gardening advice, which the president takes as sage wisdom on the nation’s economy.

In 2018, I felt like the Chauncey Gardner of the art world, only instead of learning all I know from television, I had the internet. It started at the very end of 2017 when I spent half a day researching and writing about blockchain and art. I published a short blog post called The Blockchain Art Market Is Here and went to bed thinking nobody would ever read it. The next day I searched “blockchain art” on Google, only to discover my article had somehow stumbled to the top of the search results (where it stayed for most of the year). Within days I was getting dozens of emails with detailed questions about art and blockchain.

Traffic for my article The Blockchain Art Market Is Here followed a similar trajectory to the crypto markets… down and to the right

Then came interview requests and invitations to speak on panels at conferences all around the world as an expert in blockchain and art. I had somehow become an accidental expert despite knowing very little. To fix this, I went to a lot of conferences and spoke with a lot of folks who were much smarter than I am. I wrote a few more articles, started a podcast, and I learned enough to be able to moderate two panels in London at Christie’s Art + Tech event. It helped that my panels were loaded with brilliant, dynamic, cutting-edge thinkers. All I really had to do was get out of their way.

Blockchain had exploded in the art world, and I had just written the right article at the right time. Whether you cared about blockchain or not, you were forced to express your opinion or risk being left out of the conversation.

Then the cryptocurrency market started to tank, and the only people left talking about blockchain and art either truly believed in it and were building really cool stuff or were late to the party and did not realize the hype train had left the station.

With no more requests for speaking engagements, I went back to doing what I enjoy most: writing about the crazy stuff at the intersection of art and tech that I find fascinating.

Part Two: AI Art Gets Awesome

Robbie Barrat, AI Generated Nude Portrait #1, 2018

Early in 2018, I became obsessed with the Twitter feed of @DrBeef_, a hyper-creative teenage artist named Robbie Barrat from West Virginia. Back in April, we became friends after I interviewed him and purchased some of his AI Nudes. I was a huge fan, and Robbie was (and still is) really generous in helping me better understand how artists are using GANs (generative adversarial networks) to make really cool new art.

However, almost nobody read my interview with Robbie (AI Art Just Got Awesome) in the first week, so I didn’t really think much of it. Then two months after I initially published the article, I noticed a spike in the number of people reading the interview.

Traffic for my interview with Robbie Barrat, AI Art Just Got Awesome

Two things had happened. First, a bunch of other media outlets had picked up on Robbie’s work. Second, Christie’s was heavily promoting that it was going to be the first to sell an AI artwork at auction. The work they were selling was by a French art collective called Obvious, whom I’d also been friendly with on Twitter.

Portrait of Edmond Belamy, 2018, Obvious

Unfortunately, Obvious had made some poorly thought out public claims about the AI being responsible for making the art, implying no real human involvement. The media ate that up and ran like crazy with it, undercutting the brilliant work that many AI artists had been doing for years by further suggesting humans had no role in creating AI art. Additionally, Obvious had borrowed heavily from the work of Robbie Barrat and they did not do a great job of crediting him. This made them a pariah among the AI art community.

Still, I sympathized with Obvious. There was no way they could have predicted that they would end up on the world stage having their every word and action scrutinized. So when Hugo Caselles-Dupré, the tech lead from Obvious, confided in me that the media’s version of the story was out of control and he wanted to come clean with the real story to smooth things over with the AI art community, I obliged. The interview, initially published under the title The AI Art At Christie’s Is Not What You Think, went for well over an hour and was the first article where Obvious acknowledged that they borrowed heavily from Robbie Barrat.

Traffic from my interview with Obvious technical lead Hugo Caselles-Dupré, titled The AI Art At Christie’s Is Not What You Think

The interview drew more attention when it was cited by many other outlets, including The Verge, The Art Newspaper, Artsy.net, and Smithsonian.

In less than a year, I had stumbled from becoming an accidental expert in blockchain at the exact right time to becoming an accidental expert in AI art at the exact right time. When I had first started writing about AI art and GANs in April, I had no reason to believe anyone in the mainstream would care. Now I am headed to Bahrain this March to moderate two panels on AI and creativity with a bunch of the artists I really admire, including Robbie Barrat. Life is strange and unpredictable.

Part Three: Myth Busting Banksy


The self-shredding Banksy was the perfect “art” story for the mainstream media. If you only know one or two living artists, chances are Banksy is one of them. And sadly, the most popular stories surrounding art typically focus on two areas: first, works that are either intentionally or accidentally destroyed; and second, works that sell for far more or far less than expected. So when the Banksy painting that was at auction at Sotheby’s went up in value by shredding itself during an auction, we had the perfect storm.

I felt a bit like an ambulance chaser writing about the Banksy shredding, but as a blogger interested in art and tech, it felt natural for me to write about the device and how I thought it worked (or didn’t work). This was a quick article - I polled my father and brothers (who are all engineers) on their thoughts and pumped the article out in an hour or two, and it became the most popular Artnome article of the year with over 43K page views.

Traffic from my article Myth Busting Banksy

You’ll notice this article did not have the SEO staying power of the others. Seemingly everyone weighed in on it for about a week and then forgot about it. In this case, the enormous spike in traffic was largely due to other better-known outlets picking up our story and linking back to it, most notably, Boing Boing and the AV Club (for which we are always grateful).

Part Four: To The Moon! …Maybe

Around early October, I started thinking I was on my way to 100K visitors a month, which felt mind blowing for a blog that averaged one post a month and mostly focused on data and art history (the above-mentioned stories notwithstanding). I fantasized about the traffic growth I might get if I wrote with more frequency. To find out, I stayed up later on weeknights and wrote on both weekend days instead of just one. Aaaaand… my traffic came crashing back to earth.

Pageviews by month across all of Artnome since the site started in June of 2017

I had made two mistakes: A) I misread three good months of increasing traffic as a solid trend, and B) I assumed more content would automatically mean more traffic. This is a pretty bad mistake for a guy who has spent almost two decades in digital marketing for his day job. It’s much easier to see what happened if you look at it from a weekly or even daily view instead of monthly.

Pageviews by week across all of Artnome since the site started in June of 2017

Pageviews by day across all of Artnome since the site started in June of 2017

As becomes obvious on the daily pageviews chart, the bulk of my record-breaking months for traffic came from one or two days - not a sign of smooth and steady growth, just a few outliers. But I fell victim to seeing what I wanted to see rather than what was there.

Part Five: Onward… The Good News

So buried under the outliers, there is actually some really solid growth for Artnome in 2018. And it is built not on the huge number of people chasing stories about shredded Banksy paintings, but on really intelligent and creative people looking to learn more about art, tech, and data.

In fact, six of the top seven Artnome posts this year really had nothing to do with news at all. In particular, two articles I wrote on generative art, Why Love Generative Art and Generative Art Finds Its Prodigy, performed extremely well and continue to drive traffic.


In many ways this is a relief. If the secret sauce to growing Artnome was to race against thousands of other news outlets to write a high volume of short articles about artworks getting damaged, I couldn’t compete (and wouldn’t want to).

Instead, I think there is a large audience of folks who want articles that go a bit deeper into tech and art, whether it is:

  • Using data to highlight new discoveries like our recent post on Van Gogh’s shift in color palette

  • Providing an in-depth look at the arms race for compute power in AI art

  • Drawing attention to the need for better data and analytics on art

  • Showing how forgery and misattribution flourish in the absence of good data

  • Highlighting innovators who are trying to make the world better for artists

  • Sharing stories about artists and art movements who don’t get nearly as much attention from the art world as they deserve

As for growing Artnome, I think I will listen to the sage wisdom that Chauncey Gardner gave to the president of the United States on growing the economy in the movie Being There.


Thanks for reading. If you have thoughts or questions, you are always welcome to hit me up at jason@artnome.com. Here’s to wishing you and your family a happy and productive 2019!


Artist Cryptograffiti Sets Auction Record For Least Expensive Art

December 28, 2018 Jason Bailey

“I’m excited about a future where micropayments are omnipresent. Artists paid by the view, writers by the poem, musicians by the listen.” - Cryptograffiti

In a recent auction designed to sell to the lowest bidder, artist “Cryptograffiti” sold his elegant work Black Swan, a collage made from a single dollar bill, for $0.000000037, making it the least expensive artwork ever sold at auction. To understand why, we recently spoke with the artist.

Like those of Banksy and Shepard Fairey, Cryptograffiti’s origin story begins with street art, only his has a uniquely Silicon Valley twist. Around 2011, Cryptograffiti left a job at Apple to launch a startup inspired by a Myspace feature called “Top Eight.” His startup’s product allowed you to share your favorite photos in a tangible piece of hardware:

The “Top Eight” was really fascinating to me because this was back when social networks were really picking up steam. The psychology behind why that was such a popular feature was really interesting. I thought that if I could make a product that encapsulated that, then the product would also be popular… kind of a modern take on lockets. You can wear a photo, and then there was an app so you can also tell people which photo you were wearing and why.

It turned out that the person helping Cryptograffiti develop the app was really into Bitcoin and was interested in being paid in cryptocurrency. This got Cryptograffiti looking at Bitcoin and blockchain much more closely.

It was pretty clear to him that blockchain and cryptocurrency would eventually make the old banking system obsolete, so he began exploring this idea of making art using materials from the dying banking system (old credit cards and paper money) to “help explain this new era that was coming” ushered in by currencies like Bitcoin.

Cryptograffiti then had an “a ha” moment in late 2012 when he learned about micropayments.

I started hearing about micropayments as being the future: essentially being able to pay for things in little bits that wouldn’t be possible otherwise because of the minimum fees that come with credit cards…

…I have artists in my family and I was aware of the trials and tribulations that they had in the traditional art world. So I started to think of different ways that crypto and micropayments could be used specifically for artists as new revenue channels, and that got me to thinking about doing street art with the QR codes attached, and if people liked the work, then they could send over some Bitcoin. There were a number of different things that made me want to go all in, and in 2013, my startup was only doing “okay,” and it just didn’t seem as fascinating as this new world that was laid out before me. So I just decided to really jump in. It was super risky, but I’m really glad that I did.

Seattle, example of Cryptograffiti’s street art made from credit cards and using QR codes to accept tips in Bitcoin

I asked Cryptograffiti to help me understand micropayments a little better, because while I love the idea in theory, I couldn’t understand how it would work if the transaction fees associated with making payments using cryptocurrencies would exceed the amount of money being spent with a micropayment.

A lot of it depends on how overloaded the system is. Back in 2012 and 2013, there was just not as much congestion going on. But if you look back a year ago, at December 2017, the fees were sky high for Bitcoin because there were a lot of transactions happening and miners could pick who they wanted to work with, and so settlement times were slower and the fees were higher. A lot of this was coming down to a scaling issue, but there are solutions in the works. That’s part of why I wanted to do something with the Lightning Network with my art, because there is so much talk about price in the mainstream media and really not much discussion outside the crypto circles about some of the solutions that people are working on.

Which brings us back to Cryptograffiti’s Black Swan setting the auction record for least expensive artwork sold at auction. The auction was designed to reward the lowest bidder (instead of the highest bidder) to draw attention to the increasing viability of micropayments now made possible by the Lightning Network. The Lightning Network speeds up Bitcoin transactions while reducing transaction costs. As Cryptograffiti describes it:

The “Black Swan” was a fun idea I had knowing that it would not be lucrative, to help spread awareness about the Lightning Network. For those that don’t know, the Lightning Network is a payment channel layer on top of Bitcoin to help alleviate some of these payment scaling issues. So essentially, you can open up a channel with someone, make payments with them, and it will get settled up with the blockchain later on when the channels are closed, and so it helps with the congestion. There are no fees and it’s very quick. It’s really groundbreaking stuff. If it works, then it is going to bring about this era of micropayments that I yearned for from the beginning. Doing things like paying for reading an article or paying by the song or by the view for an artwork, these are all interesting ideas that haven’t been able to happen yet because of the payments to middlemen like credit cards.
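To make the channel idea concrete, here is a toy model in Python of the flow Cryptograffiti describes: many tiny payments accumulate off-chain, and only the final balances touch the blockchain. This illustrates the concept only; the real Lightning Network involves funding transactions, commitment updates, and HTLCs.

```python
# Toy payment channel: many off-chain micropayments, one on-chain settlement.
class PaymentChannel:
    def __init__(self, deposit_a: float, deposit_b: float):
        self.balances = {"a": deposit_a, "b": deposit_b}

    def pay(self, frm: str, to: str, amount: float):
        assert self.balances[frm] >= amount, "insufficient channel balance"
        self.balances[frm] -= amount   # instant and feeless: nothing is
        self.balances[to] += amount    # broadcast to the blockchain yet

    def close(self) -> dict:
        return self.balances           # only this final state settles on-chain

channel = PaymentChannel(deposit_a=0.01, deposit_b=0.0)
for _ in range(1_000):                 # a thousand Black-Swan-sized payments...
    channel.pay("a", "b", 0.000000037)
print(channel.close())                 # ...settled in a single transaction
```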

Cryptograffiti’s Black Swan shown next to a tiny potted plant.

The Black Swan itself is clever and aesthetically appealing. It’s a tiny work measuring in at 1.44 in x 1.75 in (3.66 cm x 4.45 cm) and features Cryptograffiti’s signature style of collage using older forms of physical currency, in this case, a single dollar bill. But I find the larger performance to be the most engaging part of the work.

For example, the video Cryptograffiti created to promote the artwork reminds me of a bootstrapped version of the massive marketing campaigns put out by Sotheby’s and Christie’s to promote and elevate works they bring to auction.

The special protective case, the white glove treatment, and the soundtrack (Mozart’s Eine Kleine Nachtmusik) all create a hilarious parody of the exclusivity and seriousness with which we treat important artworks at auction, which in turn drives home the absurdity of selling Black Swan for as little as possible in an auction designed as a race to the bottom.

Beyond the brilliant marketing campaign, I see the auction itself as an essential part of the artwork. For me, Black Swan is a conceptual or performance art piece with the swan itself serving as just one part of the performance.

Cryptograffiti’s Black Swan selling at auction for $0.000000037

In a world where it is nearly statistically impossible for artists to be “discovered” and self-shredding Banksys and controversial AI art capture headlines, artists seeking a larger audience for their work could learn from Cryptograffiti. In many ways, Cryptograffiti’s savvy Black Swan marketing campaign is indistinguishable and inseparable from the artwork itself.


New Data Shows Why Van Gogh Changed His Color Palette

December 24, 2018 Jason Bailey
Vincent Van Gogh, Wheatfield With a Reaper, September, 1889

When most people think of Van Gogh, the first color that comes to mind is a warm, radiant, golden yellow. Yellow sunflowers, yellow fields of grain, even the yellow moon in Starry Night.

But Van Gogh’s paintings did not start out that way. As late as 1885, roughly halfway through the short decade Van Gogh spent painting, he was still in his Dutch period, painting works like The Potato Eaters, which feature dark, muddled grays, browns, and greens.

Vincent Van Gogh, The Potato Eaters, 1885

Curious about this shift from dark to light, we decided to use data visualization techniques to better isolate the moment of transition to a bright yellow color palette.

As a first step, we sorted every Van Gogh painting by year and then calculated the simple arithmetic average, dividing the summed pixel values by the total number of paintings. In layman’s terms, we created the “average” Van Gogh painting for each year he was active. We were hopeful that this could pick up some of the subtleties in his shifting color palette over time. There were too few works to get a solid average in the first two years, so we shortened the range to 1882-1890.
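A compact sketch of that per-year averaging might look like the following. The file-naming scheme (year as a filename prefix) is a hypothetical stand-in for the Artnome database, but the grouping-and-averaging logic is the same.

```python
import numpy as np
from PIL import Image
from pathlib import Path
from collections import defaultdict

# Hypothetical naming scheme: "1888_sunflowers.jpg", "1885_potato_eaters.jpg", ...
by_year = defaultdict(list)
for p in Path("van_gogh").glob("*.jpg"):
    by_year[p.name[:4]].append(p)

for year, paths in sorted(by_year.items()):
    # Per-pixel mean across all paintings from a single year.
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB").resize((400, 400)), dtype=float)
        for p in paths
    ])
    Image.fromarray(stack.mean(axis=0).astype(np.uint8)).save(f"avg_{year}.png")
```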

The “average” Van Gogh painting for each year from 1882 to 1890 (one composite image per year)

Looking at the series of images, there is an unmistakable shift towards a lighter yellow palette starting in 1888.

We are not the first to notice this. There are two popular theories about why Van Gogh shifted his color palette:

  • Illness/medication leading to “yellow vision”

  • Influence from the French Impressionists while working in Paris

We will briefly look at both theories below and then offer up our own.

Did Van Gogh Suffer From “Yellow Vision”?

One popular theory behind the shift in Van Gogh’s color choices is that he might have suffered from xanthopsia, or “yellow vision.” Xanthopsia is a “color vision deficiency in which there is a predominance of yellow in vision due to a yellowing of the optical media of the eye.” When caused by glaucoma, this can also include halos and flickering, which many think explains why Van Gogh depicts light as radiating outward, as in The Night Cafe (1888) and The Starry Night (1889).

Vincent Van Gogh, The Night Café, 1888

Vincent Van Gogh, The Starry Night, 1889

Others believe that Dr. Gachet, the physician who treated Van Gogh in his final months at Auvers-sur-Oise, may have treated Van Gogh’s seizures with digitalis extracted from the foxglove plant, which is also known to cause yellow-blue vision and halos as a side effect.

Vincent Van Gogh, Portrait of Dr. Gachet, 1890 (note the foxglove plant shown in the portrait)

Another frequently cited reason for the shift in Van Gogh’s color palette was his move to Paris in 1886. It is generally assumed that he was inspired by the bold use of color by the French Impressionists.

We were not convinced by the medical explanations for the shift in Van Gogh’s color palette, and we could not think of any French Impressionists who painted with colors nearly as bold as Van Gogh’s, so we decided to take a look at some other possibilities.

Did Van Gogh Use More Yellow Because He Moved to a Sunnier Climate?

Van Gogh was a restless soul and moved around quite a bit. He also spent a lot of time painting outdoors, especially in his later years. Since he famously struggled with mood swings, we thought location and, more importantly, weather patterns might have influenced his use of color.

To test this, we created composite images averaging every painting Van Gogh created from each of the major locations he worked from and compared them to weather patterns from those regions. We think the results are quite remarkable.

Composite “average” paintings paired with regional weather data for each location: The Hague, Arles, Nuenen, Saint-Rémy, Paris, and Auvers-sur-Oise

Look at the spike in sunshine in Arles as compared to all previous locations! We feel pretty confident that it was the warm weather and bright colors of southern France that influenced Van Gogh’s shift towards bolder colors, not “yellow vision” or exposure to the French Impressionists as previously thought.

Not only did Van Gogh literally see the world bathed in yellow sun while in Arles and Saint-Rémy, he was also able to get outside more often because there were more sunny days. I don’t think it is unreasonable to assume exposure to the sun and outdoors may also have lifted his mood, causing him to brighten his palette in response, as well.

Charting the average painting by location also surfaced some interesting timing issues. To make them clearer, we created the chart below. Each bar is colored using the average color of the paintings created in that specific region. The chart then shows the order in which Van Gogh lived in each region and the total number of paintings produced there.
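A chart like this takes only a few lines of matplotlib to build. The painting counts and average colors below are illustrative placeholders, not the actual values from the Artnome database.

```python
import matplotlib.pyplot as plt
import numpy as np

locations = ["The Hague", "Nuenen", "Paris", "Arles", "Saint-Remy", "Auvers"]
counts = [140, 195, 225, 187, 143, 76]             # illustrative painting counts
mean_rgb = np.array([                               # illustrative average colors
    [96, 88, 70], [70, 64, 48], [120, 110, 90],
    [150, 130, 70], [140, 125, 75], [110, 115, 80],
]) / 255.0

plt.bar(locations, counts, color=mean_rgb)  # each bar painted its region's average
plt.ylabel("Number of paintings")
plt.title("Van Gogh's average palette and output by location")
plt.show()
```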


Note that from this chart we can very clearly see that Van Gogh’s palette shifted after, not during, his time spent with the French Impressionists. Let’s call that myth busted.

It is also clear from the chart that Van Gogh’s palette turned yellow years before he became a patient of Dr. Gachet at Auvers-sur-Oise. Second myth… also busted.

This leaves our theory of increased exposure to sunlight, which the data supports. Of course, correlation doesn’t necessarily imply causation, and we can’t say for sure that the weather caused Van Gogh’s palette to switch. But we feel it is a stronger hypothesis than the ones currently out there, and Van Gogh certainly left us plenty of clues to support our belief that increased sun was his inspiration for using more yellow. Our favorite:

“How wonderful yellow is. It stands for the sun.” - Vincent Van Gogh

Vincent Van Gogh, Vase with Fifteen Sunflowers, 1888

Conclusion

At Artnome, we are big believers that data and new analytical tools can and should be used to provide new context for important art and artists. In this case, we feel relatively confident that by using data visualization, we have ruled out the two most popular explanations for Van Gogh’s shift in color palette: first, illness/medication; and second, Impressionism. We have also used weather data and geo-data to propose a more reasonable theory behind the switch to a bright yellow palette: When Van Gogh moved to the south of France, he experienced a significant increase in sunny days. His world became dramatically brighter and more yellow; he simply painted what he saw (while adding his own artistic license).

This article would not be possible without the hard work and analysis of Artnome data scientist Kyle Waters. He has made an enormous contribution to Artnome in a short amount of time, and we are excited to continue working with him. In an upcoming article, he will go into more detail about how he calculated the average images for Van Gogh.

Also, if you like this article, you may also enjoy our other nerdy art articles on:

  • Inventing the Future of Art Analytics

  • Quantifying Abstraction

  • Searching All 1800+ Of Munch’s Paintings With Machine Learning

As always, thanks for reading and for your support. Feedback is always welcome, and you can contact us at jason@artnome.com.




Machine Learning Art: An Interview With Memo Akten

December 16, 2018 Renée Zachariou

Memo Akten, Learning to see: We are made of star dust (#2), 2017

“A deep neural network making predictions on live camera input, trying to make sense of what it sees, in context of what it’s seen before. It can see only what it already knows, just like us. Trained on images from the Hubble telescope. (not 'style transfer'!)”


“If we do not use technology to see things differently, we are wasting it.”
- Memo Akten

I met Memo Akten before he grabbed the train to London, where he is currently developing some exciting projects and pursuing a PhD in machine learning.

R: Memo, I first contacted you for an article I was writing on artificial intelligence and the art market (you can read it here). The timing was too tight, though, so I’m glad we’re meeting today to discuss your art practice more broadly! But let’s start with AI anyway: as an artist who has long been active in this field, I am curious to hear your analysis of the field as it stands now. Can you briefly explain who you are and what you do?

M: Broadly speaking, I work with emerging technologies as both a medium and a subject matter, looking at their impact on us as individuals, as extensions of our mind and body, and their impact on society, culture, tradition, ritual, etc.

Simple harmonic motion #12 for 16 percussionists. Live at RNCM

These days I’m mostly thinking about machines that learn, machines that think; perception, cognition, bias, prejudice, social and political polarization, etc. The current rise of big-data-driven, so-called ‘AI’ acts as a rather apt mechanism through which to reflect on all of this.

I generally try to avoid using the term ‘AI’ - unless I’m specifically referring to the academic field - as it’s very open to misinterpretation and unnecessarily egregious disagreement over terminology. Once, after a panel, I had a member of the audience approach me, and rather angrily explain to me that AlphaGo (DeepMind’s software which beat the world champion Go player) could not be considered ‘AI’ because it had no ‘sense of self,’ which is okay, I guess. But it’s also why instead I say these days I work with machine learning, a term that’s easier to define – a system which is able to improve its performance on a particular task as it gains experience. More specifically, I work with deep learning, a form of machine learning which is able to operate on vast amounts of ‘raw,’ high-dimensional data, to learn hierarchies of representations. I also think of it as the process of extracting meaningful information from big data. A more encompassing term which can refer to what we usually mean by ‘AI’ these days is ‘data-driven methods or systems,’ and specifically ‘big-data-driven methods or systems.’ 

R: So what you’re interested in is not the technology itself, but the effect on society? If, let’s say, pigeon catching was the latest tech revolution, would you be working on that instead? 

M: If it impacted our world in such a massive way as the current big-data-driven systems do, I probably would. For example, I’m also very interested in the blockchain, but I do not feel it is as urgent a topic. Maybe it will be in a few years… (especially with the energy consumption!).

R: AI-generated art surely feels like a hot topic right now with the recent market hype around the Obvious sale at Christie’s [an AI-generated painting that fetched $432,500 in October 2018]. What do you make of it?

M: First, I’d like to set the context for this discussion by bringing to attention the fact that the art market is a place where, with the right branding, you can sell a pickled shark for $8 million. The art market is ultimately the purest expression of the free, open market. The price of an object is determined by how much somebody is willing to pay for it, which is not necessarily related to its cultural value.

I decided not to talk about this before the auction because I feel the negative press and pushback from other folks in the field created too much controversy and fueled the hype. Articles came out daily with opinions from experts, and I’m sure all of this hype inflated the price [the painting was initially estimated at $8,000-10,000].

There’s a spectrum of approaches to the practicalities of making work in this field with generative deep neural networks:

  • Train on your own data with your own (or heavily modified) algorithms

  • Train on your own data with off-the-shelf (or lightly modified) algorithms (e.g. Anna Ridler, Helena Sarin)

  • Curate your own data and use your own (or heavily modified) algorithms (e.g. Mario Klingemann, Georgia Ward Dyer)

  • Curate your own data and use off-the-shelf (or lightly modified) algorithms

  • Use existing datasets and train with heavily modified algorithms

  • Use existing datasets and train with off-the-shelf (or lightly modified) algorithms (this is what Obvious has done)

  • Use pre-trained models and algorithms (e.g., most DeepDream work, the recent BigGAN, etc.)

Personally, I think it is possible to make interesting work around each of these poles (and I have tried every single one!). But as you get towards the end of the spectrum, you’ll need to work harder to give it a unique spin and make it your own. And I think a very valid approach is to conceptually frame the work in a unique way, even if using existing datasets, or even pre-trained models.

Robbie [Barrat], a young artist, was very upset that Obvious stole his code (which was open source with a fully permissive license at the time). It’s true that they used his code, especially to download the data. But it’s important to remember that the code which actually trains and generates the images is from [ML developer/researcher] Soumith Chintala, which Robbie had forked [copied] from. And the data is already online and open (in fact, I had also trained the exact same models on the exact same data, and I know others did, too). What actually shapes the output and defines what the resulting images look like is the data - which is already out there and available to download - and the algorithm - which, in this case, is a Generative Adversarial Network (GAN) implemented by Chintala. Anybody who puts that same data through that same algorithm (whether it’s Chintala’s code, or other implementations, even in other programming languages) will get the exact same (or incredibly similar) results.

I’ve seen some comments suggesting that the Obvious work was intentionally commenting on this issue of authorship, perhaps in a lineage of appropriation art, similar to Richard Prince’s Instagram Art, etc. But I don’t think that is the case, judging by Obvious’ interviews and press release. Instead, Obvious seems to be going down the ‘can a machine make art?’ angle, which is a very interesting question. Lady Ada Lovelace was already writing about this in 1843, and there have been countless debates, writings, musings, and works on this since then. So personally, I would look for a little bit more than just a random sample from a GAN as a contribution to that discussion. Like I mentioned, what somebody is willing to pay for an artifact is not necessarily related to its cultural value. If a student were to make this work, I would try to be very positive and encouraging, and say, “Great work on figuring out how to download the code and to get it to run. Now start exploring and see where you go.”

On a side note, I’m not a huge fan of the label ‘AI art,’ because I’m not a fan of the term ‘AI,’ but beyond that, because the term ‘AI art’ is somehow infused with the idea that only the art being made with these very recent algorithms is ‘AI art,’ whatever that means. I definitely do not consider myself an ‘AI artist.’ If anything, I’m a computational artist, since computation is the common medium in all of my work. People make art by writing software, and have done so for 60 or so years (I’m thinking of John Whitney, Vera Molnar, etc.), or even more specifically, Harold Cohen was making ‘AI art’ 50 years ago. In a tiny corner of the computational art world, Generative Adversarial Networks (GANs) are quite popular today, because they’re relatively easy to use, and for very little effort, produce interesting results. Ten to fifteen years ago I remember Delaunay triangulation being very popular, because again, for relatively little effort, you could produce very interesting and aesthetically pleasing results (and I’m guilty of this, too). And in the ‘80s and ‘90s, we saw computational artists using Genetic Algorithms (GA), e.g., William Latham, Stephen Todd, Karl Sims, Scott Draves, etc. (On a side note, GA is a subfield of AI. So technically they are all AI artists, too.) Computational art will continue, it will grow, the tool palette available to computational artists will expand. And it’s fantastic that new algorithms like GANs attract the attention of new artists and lure them in. But I will just avoid the term ‘AI art’ and call them computational artists or software artists or generative artists or algorithmic artists.

R: That’s it for market sentiment, then. Let’s focus on your practice again. What projects are you currently working on?

M: There are a few angles that I’m pursuing, all very research-oriented. First is a theme that I’ve been investigating for a while now, which is looking at how emerging technologies – in this case, deep learning – can augment our ability to creatively express ourselves, particularly in a realtime, interactive manner with continuous control - analogous to playing a musical instrument, like a piano. How can I create computational systems, now using deep learning, that give people meaningful control and enable them to feel like they are able to creatively and even emotionally express themselves?

From a more conceptual angle, I’m interested in using machines that learn as a way to reflect on how we make sense of the world. Artificial neural networks [systems of hardware and/or software very loosely inspired by, but really nothing like, the operation of neurons in biological brains] are incredibly biased and problematic. They’re complicated, but can be very predictable, as well. Just like us. I don’t mean artificial neural networks are like our brain. I mean I just like using them as a mirror to ourselves. We can only understand the world through the lens of everything that we’ve seen or heard or read before. We are constantly trying to make sense of everything that we experience based on our past experiences. We see things not as they are, but as we are. And that’s what I’m interested in exploring and exposing. Some of my work tries to combine both of these (and other) themes. E.g., my Learning to See series is both a system for realtime expression - a potential new form of filmmaking and digital puppetry - and ultimately a demonstration of this extreme bias. One who has only ever seen thousands of images of the ocean will see the ocean everywhere they look.

As a more distilled version of this perspective, in 2017 I made a Virtual Reality (VR) piece FIGHT!. It doesn’t use neural networks or anything like that, actually. It uses the technology of VR, but is about as opposite to VR as is possible, I think. In the headset, your eyes are presented with monocularly dissimilar (i.e., very different) images. Your brain is unable to integrate the images together to create a single cohesive 3D percept, so instead the two rival images fight for attention in your conscious awareness. In your mind’s eye, you will not see both images blended, but the two rival images flicker back and forth as they alternate in dominance. In your conscious experience, your mind will conjure up animated swipes and swirly transitions – which aren’t really there. And this experience is unique and different for everybody, as it depends on your physiology. Everybody is presented with the exact same images, but everybody “sees” something different in their mind. And it’s impossible for me to know or see or ‘empathize’ with what you see. And of course, this is actually always the case, not just in this VR experience, but in daily life, in everything that we experience. We just forget that and assume that everybody experiences the world in the same way we do.

While I’m interested in these themes from a perceptual point of view, the underlying motivation with these kinds of subjective experiences is to expose and investigate cognitive bias and polarization. I come from Turkey, which is currently torn in two over our current president. In the UK, where I’ve been living for 20 years, the Remain/Brexit campaign has also radically split society. There seems to be a trend where people in one camp attribute the other camp’s political views to them being ‘stupid.’ E.g., I’m very much for remaining in the EU, but it disturbs me when I see other ‘remainers’ believe that the only possible explanation for somebody voting to leave the EU is that they’re either stupid or racist (or both). I can’t see the world in such simple black-and-white terms. I’m sure many (or at least some) leavers have a line of reasoning which may be more intricate than just being ‘stupid’ or ‘racist,’ even if I don’t agree with it. And if we refuse to acknowledge that, we can’t have a discussion, and we’ll never be able to reconcile our differences. We’ll be driven further apart, and ultimately things will only get worse.

R: Can you tell us a bit more about the PhD you’re currently doing at Goldsmiths, University of London? Is it purely technical?

M: My idea going into the PhD was very ambitious. I wanted to weave together art, neuroscience, physics, information theory, control theory, systems theory, perception, philosophy, anthropology, politics, religion, etc., but that turned out to be too much, at least for a first PhD. Now it’s narrowed down to being more technical. And like I mentioned before, for the past few decades I have been trying to create systems that enhance the human experience, particularly of creative expression. What I’m interested in are realtime, interactive, closed feedback loops with continuous control.

This is also how we sense the world. E.g., our eyes are constantly scanning, receiving signals, moving, receiving signals, moving. And the brain integrates all of that information, and that’s how we perceive and understand the world. This is also how we embody our tools and instruments, through action-perception loops. This is how we can embody something like a bicycle or a car, or from a creative self-expression point of view, it’s how we embody something like a piano: we hit a key, hear a note, feel it and respond to it. Eventually, we get to a stage where we don’t think about what we’re playing, we just feel it, it becomes an extension of the body, and the act of playing becomes an emotional act in itself. I don’t feel a tool like Photoshop has that level of immediacy or emotional engagement, once you click on the menu dropdown, etc…

I am looking to use deep learning in that context, to achieve meaningful, expressive continuous control. The way generative deep learning mostly works right now is, for example, you run training code on a big set of images, then you run the generation code, and it generates images. It’s like a black box where you can only press one button: ‘generate something.’ Of course, there are some levels of control you could have. You can control the training data you feed it, you can pick an image and tell the code to create similar images. And in recent years, there have been more ways of controlling the algorithm. But very few of these methods are immediate, realtime closed feedback loops with continuous control. This is both a computational challenge and a system design challenge, as current systems are simply not built with this in mind (though it is a growing field, so that’s very exciting).
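To make the contrast concrete, here is a minimal, hypothetical PyTorch sketch of the two modes Akten describes: the “one button” random sample versus smooth, continuous movement through latent space. The generator below is an untrained stand-in for whatever trained model one might actually use, and the dimensions are illustrative:

```python
import torch

latent_dim = 128

# Stand-in generator: in practice this would be a trained GAN generator.
generator = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 3 * 64 * 64),
    torch.nn.Tanh(),
)

# "Press the button": one random sample, the black-box mode described above.
z = torch.randn(latent_dim)
image = generator(z).reshape(3, 64, 64)

# Continuous control: move smoothly through latent space frame by frame,
# so the output responds to realtime input rather than a single button press.
direction = torch.randn(latent_dim)
frames = [generator(z + t * direction).reshape(3, 64, 64)
          for t in torch.linspace(0, 1, steps=60)]
```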

R: We’ve talked a lot about machine learning, how about we flip that on its head: can machines teach us something? 

M: Yes, definitely! We can look at today through an anthropological timescale: what’s happening in 2018 is not disconnected from what happened 100 or 10,000 years ago. When Galileo took a lens and made a telescope to look at the stars, he literally allowed us to look at the world in a whole new light. We cannot be the same after that. Well, that would have worked better if the Church hadn’t stepped in. If we do not use technology to see things differently, we are wasting it.

Take word embeddings, for example [a set of techniques that maps words and phrases to vectors of real numbers]. There’s a well-known model trained on three billion words of Google News. The program does not know anything to begin with - it doesn’t know what a verb is, it has no idea of grammar - but it eventually creates semantic associations. So it learned about gender, for example, and you can run mathematical operations on words like king – man + woman => queen. It’s learnt about the prejudices and biases encoded in three billion words of news, a reflection of society. Who knows what else is in that model. I wrote a few Twitter bots to explore that space, actually: @wordofmath and @wordofmathbias
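To make that arithmetic concrete, here is a minimal sketch using the gensim library with the pretrained Google News vectors Akten mentions (the file path is illustrative; the vectors are a separate download):

```python
# Word-vector arithmetic on the pretrained Google News word2vec embeddings.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# king - man + woman => queen
print(vectors.most_similar(positive=["king", "woman"],
                           negative=["man"], topn=1))
# Expected output: [('queen', <similarity score>)]
```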

But even Google autocomplete is a really powerful way of looking at what our collective consciousness is thinking or feeling. I wrote a poem about this in 2014. It’s a collaboration with Google (the search engine, not people working at Google), the keeper of our collective consciousness. And actually it’s more a collection of prayers.

A very powerful project in this realm I really like is by Hayden Anyasi. He was disturbed by the way newspapers selected images to accompany news stories, so he created an installation that takes a picture of your face and then creates a news story about you, based on the data it was trained on: a large dataset of newspaper articles. So if you’re an attractive young white woman, the story generated might be about winning some contest or something. If you’re a young black man, the story is more likely to be about crime. Some people might think that this just reflects reality, but unfortunately, that expectation is exactly the problem, as there are situations where images have been selected to accompany stories not because they are related to the story, but simply because that’s what the expectation was. In Hayden’s own words: “A young man's face was used as the lead image in a story about a spate of crimes. Despite being cleared of any involvement, his picture was later used again anyway. Did his face meet the standard expectations of what a criminal should look like?” It’s easy to dismiss these things when you’re not affected, but when you see it like this, this kind of art punches you. 

R: Speaking of scary things, there’s a lot of anxiety around technology these days. Would you say you’re a techno-optimist?

M: I’m definitely not very optimistic. I’m not worried about the singularity or the ‘intelligence explosion’ or robots taking over. To me, that seems more like a marketing trick that’s good for business, to sell books, and to get funding from people who are so rich that the only things which scare them are so-called ‘existential risks’ which will affect all of humanity, even people as rich and powerful as themselves. On a related note though, autonomous weapons are indeed a major genuine concern, and algorithmic decision-making systems are already in use and proving to be hugely problematic. I do believe algorithms could have the potential to be less prejudiced and fairer than humans on average, but they have to be thoroughly regulated, open source, open data, and preferably developed by non-profit organizations who are doing it only because they believe they can develop fairer systems which will be beneficial to everybody. And by ‘they’ I am referring to not just computer scientists, but a diverse team of experts across many disciplines, backgrounds, and life experiences who collectively have a much greater chance of thinking about and foreseeing the wider impact of these systems once deployed. Closed-source, closed-data systems developed by for-profit companies which are not well regulated are an absolute recipe for disaster.

But I worry more about “unknown unknowns” that can come out of nowhere and have a huge impact. Here’s a dystopia for you: what if, in the future, the link between genotype and phenotype [how a particular trait is coded in our DNA and expressed through environmental conditions] was mastered (it is something that is being heavily researched right now)? And imagine that combined with CRISPR (or its successor), there was a service which allowed you to boost your baby’s IQ to 300+. And imagine that this service was incredibly expensive, something which only a select few could afford. What kind of world would that be? I don’t necessarily believe that this exact scenario will happen, but I’m sure we will face similar situations.

On the other hand, if we are ever to cure Alzheimer’s or leukemia, it will undoubtedly be with the help of similar data-driven methods. Even the recent discovery of gravitational waves produced by colliding neutron stars is a massive undertaking in data analysis and extracting information (the detection of a tiny blip of signal) in a massive sea of background noise. Machine learning encompasses the act of extracting meaningful information from data, and so any breakthrough in machine learning will impact any field which is data-driven. And in this day and age, everything is data-driven: physics, chemistry, biology, genetics, neuroscience, psychology, economics, and even politics. So it’s impossible to predict the unknown unknowns. Who knows, maybe someday we’ll be able to photosynthesize!

But I do have a streak of optimism. However, what I'm optimistic about is not technology, but us, and a potential shift in values. If we look at the overall evolutionary arc of human morals going back thousands of years, it seems there is a trend towards expanding our circle of compassion to be more inclusive. We used to live in small tribes, and neighboring tribes would be at war. We've now expanded those tribes to the size of countries. This is still far from perfect, especially with the current rise of nationalism, but the overarching long-term trend is a positive one, if it carries on in the same direction (and that is a big open ‘if’). We've now legally recognized that half of the population - women - are the equals of men and deserve the same rights, whether it be for voting, working, healthcare, education, etc. It’s quite shocking that this has only happened so recently, in the last hundred years or so. And so the effects have unfortunately not yet fully permeated our culture and day-to-day lives, but I think it's inevitable that they will. Likewise, we’ve abolished slavery, and we legally recognize all humans to be equal. Again, unfortunately, this has happened shockingly recently, so we are absolutely nowhere near being at a level where the day-to-day practice is satisfactory. But again, hopefully, the overall long-term trend is moving in a desirable direction. And this last century has even seen massive efforts to include non-human animals in our circle of compassion, whether it be vegetarianism or veganism or animal rights in general.

So while I’m not overly optimistic, the only glimmer of hope I am able to see for the future is not any particular technology saving us, but hopefully a gradual shift in values towards prioritizing the well-being of all living things, as opposed to just a select few at the massive expense of others. The big open question, apart from whether this will happen at all, is how soon it will happen. And how much damage we will have inflicted before we realize what we've done.

R: Thanks a lot for our chat! To wrap up, do you have any reading recommendations to dig deeper into machine learning art?

M: A few years ago I collated a list of resources which I had used to get up to speed.

At the time, there weren’t many introductory or beginner-friendly materials. It was more academic books and full-on online university courses. But in the past few years, as deep learning became really popular, loads of new ‘beginner-friendly’ materials came online. So this is probably quite out of date, but for those willing to invest time, I’m sure a lot of this will help build a strong foundation.

But since I collated that list, a fantastic new resource has become available: Gene Kogan’s Machine Learning for Artists. It’s full of amazingly useful, beginner-friendly info. And another resource which I have not personally used, but have heard very good things about, is Fast.ai.


AI Artists Expose “Kinks” In Algorithmic Censorship

December 11, 2018 Jason Bailey
Tom White posing with four abstract NSFW artworks presented at the ARTificial visual arts exhibit

Tumblr recently announced that it will no longer tolerate adult content on its sites. This is problematic for artists and art historians because social media and blogging platforms have proven to be lousy at discerning nudity in art from nudity in pornography. As a result, platforms like Facebook are famously flagging important cultural art and artifacts like the Venus of Willendorf as adult content and removing them from their sites (essentially erasing our history).

While it is problematic, I can see how Facebook’s system could accidentally flag the Venus, an object actually intended to represent the nude human form, as potential adult content. However, Tumblr’s artificial intelligence is flagging all kinds of bizarre things as adult content, and it fails to capture many things that actually do contain nudity. So if you were worried that machines and AI would eventually outsmart us, steal our jobs, and then steal our boyfriends/girlfriends, fear not. The machines are just not that into us.

Mustard Dream, Tom White, 2018

So what do AI-censoring algorithms find sexy? Well, AI’s definition of nudity is actually pretty hilarious. For example, AI artist Tom White cooked up this sexy number he calls Mustard Dream, which was immediately flagged as “adult content” by Tumblr’s AI censor.

Tom White’s Mustard Dream, flagged by Tumblr as “adult content”

Apparently Tom’s milkshake brings all the droids to the yard, as Mustard Dream also scored a near-perfect 92.4% on AWS (Amazon Web Services) for “explicit nudity.”

Mustard Dream scores a 92.4% on the AWS (Amazon Web Services) Explicit Nudity detector

It is no coincidence that White’s art is scoring so high on the various AI “adult content” filters. For several years, White has been learning to see things the way machines see them. His work combines the minimal elements required for an AI to read an object in an image. He famously did this for the objects below, which were recognized as a fan, a cello, and a tick (left to right).

Fan, Tom White, 2018

Cello, Tom White, 2018

Tick, Tom White, 2018

But now White has moved on from mundane, everyday objects and become the Hugh Hefner of AI, cornering the market for saucy AI pinups. White wondered how his robo-porn would hold up against the more sensual works of modern human artistic masters. To satisfy his curiosity, he analyzed the 20,000 screen prints that the MoMA (Museum of Modern Art) has in its collection.

Mustard Dream, Tom White, 2018

Tobacco Rose, Mel Ramos, 1965, published 1966 (image courtesy of MoMA)

It turns out White’s Mustard Dream print would be content-blocked by Google and Amazon ahead of all 20,000 prints in the MoMA collection. Though, according to White, “Mel Ramos Tobacco Rose had a respectable second place.”

Pitch Dream, Tom White, 2018

White shared with me that much like Mustard Dream, his Pitch Dream “also scores with high confidence values as ‘adult’ and ‘racy’ by Google SafeSearch and as ‘explicit nudity’ by Amazon and Yahoo.” I can see why, just look at those curves ;-)


But how does White create and optimize these images to titillate the AI adult content algorithms? According to White:

Given an image, a neural network can assign it to a category such as fan, baseball, or ski mask. This machine learning task is known as classification. But to teach a neural network to classify images, it must first be trained using many example images. The perception abilities of the classifier are grounded in the data set of example images used to define a particular concept.

In this work, the only source of ground truth for any drawing is this unfiltered collection of training images.

Abstract representational prints are then constructed which are able to elicit strong classifier responses in neural networks. From the point of view of trained neural network classifiers, images of these ink-on-paper prints strongly trigger the abstract concepts within the constraints of a given drawing system. This process developed is called perception engines as it uses the perception ability of trained neural networks to guide its construction process. When successful, the technique is found to generalize broadly across neural network architectures. It is also interesting to consider when these outputs do (or don’t) appear meaningful to humans. Ultimately, the collection of input training images are transformed with no human intervention into an abstract visual representation of the category represented.

In this case, the training images used were adult in nature but have become abstracted out by Tom’s perception engine in a way that makes the resulting image appear “adult” to a neural network but innocuous to a human.
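White’s exact pipeline isn’t published in this post, but the general family of techniques is easy to sketch. Below is a heavily simplified, hypothetical example of classifier-guided image optimization (often called activation maximization) in PyTorch: start from noise and adjust the pixels by gradient ascent until a pretrained classifier’s score for a chosen class goes up. The class index and hyperparameters are illustrative, and this is not White’s actual perception engines code:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained ImageNet classifier as the "perceiver."
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Start from random noise and optimize the pixels themselves.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

target_class = 892  # illustrative ImageNet class index to maximize

normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

for step in range(200):
    optimizer.zero_grad()
    logits = model(normalize(image.clamp(0, 1)))
    # Gradient *ascent* on the target class score: minimize its negative.
    loss = -logits[0, target_class]
    loss.backward()
    optimizer.step()
```

White’s real system adds the crucial extra constraint of a physical drawing system (ink on paper), which is what pushes the results toward the abstract prints shown here.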

Not all of the AI art being created to intentionally trigger “adult content” detectors is so wholesome. If Tom White is the Hugh Hefner of AI generated “adult content,” Mario Klingemann may just be the Larry Flynt. Where White’s images are humorous, Klingemann’s are often dark, disturbing, and unsettling.


For this series of images, which Klingemann calls “eroGANous,” he intentionally evolved a generative adversarial network called “BigGAN” for “maximum NSFW-ness.” Klingemann points out, “Tumblr's filter is not happy about them, but it looks like they may still show for a few days.” The complete series can be seen here, for now.

Klingemann sees the use of AI to broadly censor content as problematic, as it results in “sterile” content. As he shared with me:

When it comes to freedom, my choice will always be "freedom to" and not "freedom from," and as such I strongly oppose any kind of censorship. Unfortunately in these times, the "freedom from" proponents are gaining more and more influence in making this world a sterile, "morally clean" place in which happy consumers will not be offended by anything anymore. What a boring future to look forward to.

Luckily, the current automated censorship engines are more and more employing AI techniques to filter content. It is lucky because the same classifiers that are used to detect certain types of material can also be used to obfuscate that material in an adversarial way so that whilst humans will not see anything different, the image will not trigger those features anymore that the machine is looking for. This will of course start an arms race where the censors will have to retrain their models and harden them against these attacks and the freedom of expression forces will have to improve their obfuscation methods in return.
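The obfuscation Klingemann describes can be sketched as a single gradient step against the classifier - the canonical adversarial-example move (a fast-gradient-sign step). The code below is a generic, hypothetical illustration, not Klingemann’s code; `model` stands in for any differentiable classifier assumed to return a single score for the flagged class:

```python
import torch

def obfuscate(image, model, epsilon=2 / 255):
    """Nudge `image` so `model`'s score for the flagged class drops,
    while keeping the change too small for a human to notice."""
    image = image.clone().detach().requires_grad_(True)
    score = model(image).squeeze()  # assumed scalar score for the class
    score.backward()
    # Step *against* the gradient to reduce the classifier's confidence,
    # bounded by epsilon so the edit stays visually imperceptible.
    perturbed = image - epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```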

Klingemann also shared several other projects by artists exploring machine learning and nudity, including artist Jake Elwes’ NSFW machine learning porn, “a 12-minute looped film that records the AI’s pornographic fantasies.” On his website, Elwes describes his project as:

A convolutional neural network, an AI, was trained using Yahoo’s explicit content model for identifying pornography, which learnt by being fed a database of thousands of graphic images. The neural network was then re-engineered to generate pornography from scratch.

Both Klingemann and Elwes cite Gabriel Goh’s Image Synthesis from Yahoo's open_nsfw (heads up, it’s also NSFW) as an early example exploring neural networks and nudity.

And then there are the slightly less pornographic nudes from AI artist Robbie Barrat, which were trained on hundreds of classical nude portraits from art history.

AI Generated Nude Portrait #1, Robbie Barrat, 2018

We have covered Barrat’s Nudes extensively in the past on Artnome and are proud and honored to have several of them in our digital art collection, though I would be curious to see how Barrat’s Nudes rate on the various AI-driven “adult content” scales, as well.

AI Generated Nude Portrait #3, Robbie Barrat, 2018

Of course, censorship is nothing new for artists. Marcel Duchamp famously explored machine-like nudity with his Nude Descending a Staircase in 1912. The hanging committee of the Salon des Indépendants exhibition in Paris, which included Duchamp’s own two brothers, declined the work, stating “A nude never descends the stairs--a nude reclines." Some of the wispy line work in Duchamp’s nude even resembles the line work in Tom White’s Mustard Dream, perhaps just one more way Duchamp was ahead of his time.

Nude Descending a Staircase, Marcel Duchamp, 1912

Even Michelangelo’s Sistine Chapel drew criticism from the Pope and Catholic Church for the dozens of nude men it depicted. It was eventually censored, with loin cloths painted onto the figures to protect the prude and the modest.

Sistine Chapel, Michelangelo, 1508

One has to wonder how long it will be before an artist like Tom White is asked to add loin cloths to works like Mustard Dream to protect the purity of thought and modesty of artificially intelligent machines. I’m not even sure what the equivalent of a loin cloth looks like to a neural network, but I’m certain that Tom White and Mario Klingemann will figure it out and find a way around such censorship.

Comment

MIT Replicates Paintings With 3D Printing and Deep Learning

November 29, 2018 Jason Bailey
RePaint can reproduce paintings regardless of different lighting conditions (credit: MIT CSAIL)

I grew up in the ‘80s/’90s in a small ranch house in a suburb of Boston, with several nicely framed, high-end reproductions of paintings from the Boston Museum of Fine Arts in my living room. Sure, they were just replicas, but I loved them: they were a great way to bring some of the magic of my favorite museum into our home, where we could live with them in the context of our everyday lives. They even had a texture to them to suggest brush strokes. But let’s face it - it was clearly not the same as seeing the actual paintings in the museum.

Fast forward 30 years, and just down the street from the Boston MFA in Cambridge, a group of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has just released a paper on a new approach for replicating paintings that they claim is 4x more accurate than existing models at recreating exact color shades.


Traditional printing that most of us are familiar with uses just four inks: cyan, magenta, yellow, and black (also known as CMYK). The team from MIT CSAIL is using ten inks layered in a stack to achieve more accurate results. To do this, the team developed a deep learning model to identify the ideal mix of colors for the stack.

They also take advantage of the older techniques of halftoning (employing spatial modulation - think of the dot patterns in Roy Lichtenstein paintings) and contoning (combining thin layers of ink), which improve accuracy and provide a smooth appearance.
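Halftoning itself is simple to demonstrate. Below is a toy sketch of ordered dithering with a 4x4 Bayer threshold matrix - a far simpler cousin of the team’s ten-ink method, shown only to make “spatial modulation” concrete:

```python
import numpy as np

# Classic 4x4 Bayer threshold matrix, normalized to [0, 1).
BAYER_4X4 = (1 / 16) * np.array([[ 0,  8,  2, 10],
                                 [12,  4, 14,  6],
                                 [ 3, 11,  1,  9],
                                 [15,  7, 13,  5]])

def halftone(gray):
    """Binarize a grayscale image (float array in [0, 1]) into dots."""
    h, w = gray.shape
    # Tile the threshold matrix across the image, then compare pixel by
    # pixel: dense dots where the image is dark, sparse where it is light.
    thresholds = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > thresholds).astype(np.uint8)
```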


One of the major advantages of the new technique is that the replica reacts to different lighting situations in a similar manner to the original painting. Previous techniques relied on “colorimetric color reproduction,” which would sample colors from a painting under a single lighting condition. But as the paper points out, this can lead to “metamerism, a well-known problem in color reproduction wherein a good reproduction is obtained under one light source, but not under another.”

According to Changil Kim, co-author of the paper and a postdoctoral fellow at MIT CSAIL:

If you just reproduce the color of a painting as it looks in the gallery, it might look different in your home. Our system works under any lighting condition, and shows a far greater color reproduction capability…

The team plans to make a data set publicly available which contains:

20,878 contone ink stack spectra and layouts, spectrally captured oil paintings, together with their optimized layouts using our ink library, and photographs of our printed reproductions under multiple illuminations.

The team at CSAIL believes the system could be used to protect originals from wear-and-tear in museums while also making it possible for people to view replicated versions in their own homes or multiple museums around the globe. According to mechanical engineer Mike Foshey:

The value of fine art has rapidly increased in recent years, so there's an increased tendency for it to be locked up in warehouses away from the public eye. We're building the technology to reverse this trend, and to create inexpensive and accurate reproductions that can be enjoyed by all.

As someone who thinks a lot about art and tech, I think it is good that we are advancing our technology for replicating works. But the romantic in me still wants to believe that there is something magical about the canvas that has been worked by the hand of the artist that will never be replicated.

Left: A photograph of one of the team’s printed sample patches. Each color square is 1 mm × 1 mm. Right: Their spectral acquisition setup.

While the early results are very promising, the new approach is still in early development and has some significant limitations. At the moment, the time-intensive nature of 3D printing means that the reproductions are much smaller than the originals, and the system struggles with certain pigments like cobalt blue.

In the future, the team plans to expand its ink library, as well as create a painting-specific algorithm for selecting inks. They also hope to capture aspects like surface texture and reflection, so that they can reproduce specific effects such as glossy and matte finishes. Given the current limitations, we likely don’t need to worry about the technique being used for forgeries right now, but it is easy to imagine it becoming problematic in the future.


Helena Sarin: Why Bigger Isn’t Always Better With GANs And AI Art

November 26, 2018 Jason Bailey
Am I Dali yet?, Helena Sarin, 2018 (Collection of Jeremy Howard)

AI art made with GANs (generative adversarial networks) is so new that the art world does not yet understand it well enough to evaluate it. We saw this unfold last month when the French artists’ collective Obvious stumbled into selling their very first AI artwork for $432,500 at Christie’s.

Many in the AI art community took issue with Christie’s selecting Obvious because they felt there are so many other artists who have been working far longer in the medium and who are more technically and artistically accomplished, artists who have given back to the community and helped to expand the genre. Artists like Helena Sarin.

Sarin was born in Moscow and studied computer science at Moscow Civil Engineering University. She lived in Israel for several years and then settled in the US. While she has always worked in tech, she has moonlighted in applied arts like fashion and food styling. She has played with marrying her interests in programming and art in the past, even taking a Processing class with Casey Reas, but Processing felt a little too much like her day job as a developer. Then, two years ago, she landed a gig with a transportation company doing deep learning for object recognition. She used CycleGAN to generate synthetic data sets for her client. Then a light went on, and she decided to train CycleGAN on her own photography and artwork.

This is actually a pretty important distinction in AI art made with GANs. With AI art, we often see artists using similar code (CycleGAN, SNGAN, Pix2Pix etc.) and training with similar data sets scraped from the web. This leads to homogeneity and threatens to make AI art a short-lived genre that quickly becomes repetitive and kitsch. But it doesn’t have to be this way. According to Sarin, there are essentially two ways to protect against this if you are an AI artist exploring GANs.

First, you can race to use the latest technology before others have access to it. This is happening right now with BigGANs. BigGANs produce higher-resolution work, but are too expensive for artists to train using their own images. As a result, much of the BigGAN imagery looks the same regardless of who is creating it. Artists following the path of chasing the latest technology must race to make their mark before the BigGAN aesthetic is “used up” and a “BiggerGAN” comes along.

Chasing new technology as the way to differentiate your art rewards speed, money, and computing power over creativity. While I find new technology exciting for art, I feel that the use of tech in and of itself never makes an artwork “good” or “bad.” Both Sarin and I share the opinion that the tech cannot be the only interesting aspect of an artwork for it to be successful and have staying power.

The second way artists can protect against homogeneity in AI art is to ignore the computational arms race and focus instead on training models using your own hand-crafted data sets. By training GANs on your own artwork, you can be assured that nobody else will come up with the exact same outputs. This latter approach is the one taken by Sarin.

Sarin approaches GANs more as an experienced artist would approach any new medium: through lots and lots of experimentation and careful observation. Much of Sarin’s work is modeled on food, flowers, vases, bottles, and other “bricolage,” as she calls it. Working from still lifes is a time-honored approach for artists exploring the potential of new tools and ideas.

Trick or Treat, Helena Sarin, 2018

The Pigeon Pea, Pablo Picasso, 1912

Radical Seasonality, Helena Sarin, 2018

Sarin’s still lifes remind me of the early Cubist collage works by Pablo Picasso and Georges Braque. The connection makes sense to me given that GANs function a bit like an early Cubist, fracturing images and recombining elements through “algorithms” to form a completely new perspective. As with Analytic Cubism, Sarin’s work features a limited color palette and a flat, shallow picture plane. We can even see lettering in Sarin’s work that looks and feels like the lettering from the newsprint used in the early Cubist collages.

I was not surprised to learn that Sarin is a student of art history. In addition to Cubism, I see Sarin’s work as pulling from the aesthetic of the German Expressionists. Similar to the woodblock prints of artists like Emil Nolde and Erich Heckel, Sarin’s work has bold, flat patterns and graphic use of black. She also incorporates the textures resulting from the process as a feature rather than hiding them, another signature trait of the Expressionist woodblock printmakers.

And Soon I'll Hear Old Winter's Song, Helena Sarin, 2018

MÜDE, Erich Heckel, 1913

Tingel-Tangel II, Emil Nolde, 1907

Woman, Erich Heckel, 1925

A Little Etching, With Apology to Modigliani, Helena Sarin, 2018

The Snow Queen, Helena Sarin, 2018

I think printmaking is a much better analogy for GANs than the oft-used photography analogy. As with printmaking, the technology behind GANs improves over time. Moving from woodblock to etching to lithography, each step in printmaking represents a step towards more detailed and realistic-looking imagery. Similarly, GANs are evolving towards more detailed and photorealistic outputs, only with GANs this transition is happening so fast that it can feel like tools become irrelevant every few months. This is particularly true with the arrival of BigGANs, which require too much computing power for independent artists to train with their own data. Instead, they work from a pre-trained model. This computational arms race has many in the AI art community wondering what Google research scientist David Ha recently put into words on Twitter:

[Screenshot of David Ha’s tweet]

Sarin collected her thoughts on this in the paper #neuralBricolage, which she has been kind enough to let us share in full below.

Will AI art be a never-ending computational arms race that favors those with the most resources and computing power? Or is there room for modern-day Emil Noldes and Erich Heckels who found innovation and creativity in the humble woodblock, long after “superior” printmaking technologies had come along?

Helena Sarin is an important artist who is just starting to get the recognition she deserves. Her thoughts here form the basis for some of the key arguments about generative art (especially GAN art) moving forward.

#neuralBricolage: An Independent Artist’s Guide to AI Artwork That Doesn’t Require a Fortune

Candy store, Helena Sarin, 2018

tl;dr With the recent advent of BigGAN and similar generative models trained on millions of images and on hundreds of TPUs (tensor processing units), independent artists who have been using neural networks as part of their artistic process might feel disheartened by the limited compute and data resources at their disposal. In this paper I argue that this constraint, inherent in staying independent, might in fact boost artistic creativity and inspire the artist to produce novel and engaging work. The created work is unified by the theme of #neuralBricolage - shaping the interesting and human out of the dump heap of latent space.

Hardly a day passes without the technical community learning about new advances in the domain of generative image modeling. Artists like myself who have been using GANs (generative adversarial networks) for art creation often feel that their work might become irrelevant, since autonomous machine art is looming and generative models trained on all of art history will soon be able to produce imagery in every style and at high resolution. So for those of us who are fascinated by the creative potential of GANs but frustrated by their low-resolution output, what options do we have?

Not that many, it seems; you could join the race, building up your local or cloud compute setup, or start chasing the discounts and promotions of ubiquitous cloud providers utilizing their pre-trained models and data sets - the former prohibitively expensive, the latter good for learning but too limiting for producing unique artwork. The third option would be to use these constraints to your benefit.

Here I share the aesthetics I’m after and the techniques I’ve been developing for generating images directly from GANs, within the constraints of only having small compute and not scraping huge data sets.

Look at it as an inspirational guide rather than a step-by-step manual. 

Setup

In any ML art practice, the artist needs a GPU server, an ML software framework, and data sets. I consider my hardware/software setup to be quite typical - I’m training all my GANs on a local server equipped with a single GTX 1080 Ti GPU. Compute resource constraints mean that you can only use specific models - in my case, CycleGAN and SNGAN_projection, since both can be tuned to train from scratch on a single GPU. With SNGAN I can generate images at resolutions up to 256x256, further upscaling them with CycleGAN.

Data sets

From the very beginning of my work with GANs I’ve been committed to using my own data sets, composed of my own drawings, paintings, and photography. As Anna Ridler, an ML artist who also works exclusively with her own imagery, rightly suggested in her recent talk at ECCV: “Everyone is working with the same data sets and this narrows the aesthetics.” I covered my approach to data set collection and organization in my recent blog post, “Playing a Game of GANstruction.”

Process

The implications of BigGAN-type models are widely discussed in the machine art community. Gene Kogan recently suggested that “like painting after the advent of the camera, neural art may move towards abstraction as generative models become photorealistic.” And at least in the short term, the move towards abstraction is in a sense inevitable for those of us working under resource constraints, as training on modestly sized data sets with a single GPU will make the model collapse long before it is able to generate realistic images. You would also need to deal with the low resolution of the GAN when training/generating images with constrained resources. Not to despair - GAN chaining and collaging to the rescue! Collage is a time-honored artistic technique - from Picasso to Rauschenberg to Frank Stella, there are many examples to draw from for GAN art.

My workflow for GAN output generation and post-processing usually follows these steps, each of which might yield interesting imagery:

Step 1: Prepare data sets and train SNGAN_projection. The reason I’m using SNGAN is that the projection discriminator allows you to train on and generate several classes of images - for example, flower paintings and still lifes. An interesting consequence of working with images that don’t have the obvious landmarks or homogeneous textures found in ImageNet is that they cause glitches in models expecting ImageNet-type pictures. These glitches cause class cross-contamination and might bring interesting, pleasing effects (or might not - debugging the data sets is quickly becoming a required skill for an ML artist). As a result, the data set’s composition/breakdown is the most important factor in the whole process.

The model is then trained until full collapse. I store and monitor the generated samples at a predefined interval, stopping the training and decreasing the interval when I start observing interesting images. This might also prove to be quite frustrating, as I’ve noticed the universal law of GANs is that the model always produces the most striking images in iterations between the checkpoints, whatever value the saving interval is set to - you’ve been warned.

Step 2: Generate images and select a couple hundred of those with some potential. I also generate a bunch of mosaics from these images using Python scripts. This piece from the Shelfie series or Latent Scarf are some examples.

Shelfie Series, Helena Sarin, 2018
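The kind of mosaic script Step 2 mentions can be very small. Here is a hypothetical sketch using Pillow - not Sarin’s actual script, and the directory name and tile size are illustrative - that tiles a folder of generated images into one grid:

```python
from pathlib import Path

from PIL import Image

def make_mosaic(image_dir, tile=256, cols=8):
    """Tile every PNG in `image_dir` into a single grid image."""
    paths = sorted(Path(image_dir).glob("*.png"))
    rows = (len(paths) + cols - 1) // cols
    mosaic = Image.new("RGB", (cols * tile, rows * tile))
    for i, path in enumerate(paths):
        img = Image.open(path).resize((tile, tile))
        mosaic.paste(img, ((i % cols) * tile, (i // cols) * tile))
    return mosaic

# Hypothetical usage: a directory of SNGAN samples.
make_mosaic("sngan_samples").save("mosaic.png")
```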

Step 3: Use CycleGAN to increase the image resolution. This step involves a lot of trial and error, especially around which images go into the target domain data sets (the CycleGAN model is trained to do image-to-image translation, i.e., images from the source domain are translated to the target domain). This step can yield images that stand on their own, like Stand Clear of the Closing Doors Please or Harvest Finale.

Harvest Finale, Helena Sarin, 2018

Step 4: Many of the SNGAN-generated images might have a striking pattern or interesting color composition but lack enough content to stand on their own. The final step, then, is to use such images as part of a collage. I select what I call an anchor image of high resolution (either from step 3 or from some of my cycleGANned drawings). I’ve also developed a set of OpenCV scripts that generate collages based on image similarity, size, and position of the anchor images, with SNGAN images setting up the background. My favorite examples are Egon Envy and Om.

Om, Helena Sarin, 2018
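As a rough illustration of the similarity step Sarin describes, the sketch below ranks candidate background images against an anchor using OpenCV color-histogram comparison. It is a stand-in under stated assumptions (file paths exist, images load correctly), not a reproduction of her scripts:

```python
import cv2

def hist(path):
    """Normalized 3D color histogram of an image, flattened to a vector."""
    img = cv2.imread(path)
    h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                     [0, 256, 0, 256, 0, 256])
    return cv2.normalize(h, h).flatten()

def rank_by_similarity(anchor_path, candidate_paths):
    """Return candidates sorted from most to least similar to the anchor."""
    anchor = hist(anchor_path)
    scored = [(cv2.compareHist(anchor, hist(p), cv2.HISTCMP_CORREL), p)
              for p in candidate_paths]
    return [p for score, p in sorted(scored, reverse=True)]
```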

This process, as is often the case with concept art in general, carries a risk of getting a bit too mechanical - the images might lose novelty and become boring - so it should be applied judiciously and curated ruthlessly. The good news is that it opens new possibilities. The most exciting directions I’ve started exploring recently use GAN outputs:

  • As designs for craft, in particular for glass bas-reliefs. Thanks to their semi-abstraction and somewhat simplified rendering of often exuberant colors and luminance, the outputs can exhibit an organic, folksy quality, and many generated images are reminiscent of patterns from the Arts & Crafts Movement. It’s still too early in the game to share the results, but I have shown images such as those in this set (Surfaces and Stories) to experienced potters and glassmakers and got overwhelmingly enthusiastic responses.

  • In what I call “computational non-photography” - layering and remixing generated images to create new ones. Indian Summer or Latent Underbrush are examples of this technique.

Latent Underbrush, Helena Sarin, 2018

Conclusion

Even with the limitations imposed by not having a lot of compute or huge data sets, GANs are a great medium to explore, precisely because generative models are still imperfect and surprising when used under these constraints. Once their output becomes as predictable as Instagram filters and BigGAN comes pre-built in Photoshop, it will be a good time to switch to a new medium.


AI Artist Gives “Perfect” TED Talk As Cyborg

November 18, 2018 Jason Bailey

There has been a lot of talk about AI (artificial intelligence) in the media over the last few months, but there are still a lot of signs that most people don’t really understand it. For example, 90% of people polled in a recent survey believe that “up to half of jobs would be lost to automation within five years.” Of course, experts in AI find this idea laughable.

In an effort to help people better understand AI, we at Artnome are doing a series of interviews with artists who are exploring AI’s potential through art. In this post, we speak with artist Alexander Reben, whose work focuses on human/machine collaboration using emerging technologies. He sees these technologies as new tools for expression, and most recently trained an AI to give a TED Talk through a robotic mask on stage.

The Perfect TED Talk?

What if you could give the perfect TED Talk? How would you prepare for it? You could try to watch all 2,600 previous TED Talks that have been given on the main stage, but at around 18 minutes each, that’s roughly 780 hours of video - a full month of watching, day and night. Even if you could watch all the talks, as a human it would be hard to find the underlying structure or algorithm that makes a great TED Talk across such a large data set. However, this type of repetitive behavior and pattern identification is exactly what computers and AI do well.

And what does a TED Talk trained on all the previous TED Talks sound like? Well, it is actually pretty hilarious and entertaining.

For this TED Talk, Reben wrote an algorithm to break down the scripts for every past TED Talk given on the main stage into four roughly equal sections. “I basically hoped that the beginnings, middles, and ends of TED Talks would be somewhat similar,” said Reben of his approach. He then used this data set to train an AI to create multiple outputs for each of the four sections, selected the outputs he liked best, and combined them for his final presentation.
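Reben’s code isn’t shown here, but the preprocessing step he describes is easy to picture. A toy, purely illustrative version that splits a transcript into four roughly equal sections by word count might look like this:

```python
def split_into_sections(transcript, n_sections=4):
    """Split a talk transcript into `n_sections` roughly equal parts."""
    words = transcript.split()
    size = len(words) // n_sections
    # First n-1 sections get `size` words each; the last takes the rest.
    sections = [words[i * size:(i + 1) * size]
                for i in range(n_sections - 1)]
    sections.append(words[(n_sections - 1) * size:])
    return [" ".join(s) for s in sections]
```

The beginnings, middles, and ends from every talk would then become four separate training corpora, one per section.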

My goal was to make a three-minute talk so it would be YouTube length, so that was the constraint in terms of how many slides I was going to use. I wasn’t going to make a full 20-minute talk because you kind of get the point after several slides.

For the images on his slides, Reben used a separate algorithm that read the script for his talk, ran a search based on the content, and then chose relevant images from the internet. Even the positioning of the images on the slides was handled by a computer, as it was done by PowerPoint’s automated slideshow maker.

I asked Reben how he thought the AI performed as a collaborator in developing the script for the TED Talk.

The AI definitely can learn and perceive patterns at a scale that we don’t. This is something that feels like a TED Talk, but is not really a TED Talk. AI are good at making things that “feel” like things in the generative sense. It picks up on the soul of the data set - if we can call it that - and it can reproduce that soul. But it can’t pick up the creativity or make something coherent. I think that is why a lot of people are interested in AI art, because it is like seeing a pattern that is not really there and our brains try to fill in the gaps. It’s Frankenstein-ish.

If Reben’s AI collaboration is Frankenstein-ish, it is closer to Gene Wilder and Peter Boyle in Young Frankenstein than Victor Frankenstein and his menacing (if not misunderstood) monster originally depicted in Shelley’s novel.


Watching people try to train machines with limited capabilities to act like humans is… well… funny. Reben knows this very well. He entertains and educates us by essentially collapsing the gap between the public’s perception of AI as “job killer” and its current and actual capabilities as a fledgling tool that is heavily dependent on collaboration and direction from humans to perform even the most basic of tasks.

What Hollywood and the mainstream media have failed to teach us is that AI is currently like an infant: growing fast, but not very knowledgeable or skilled. Where most people play up (or down) AI’s capabilities, Reben has put AI out there in all its awkward glory on the most important stage of our time - the TED stage.

That said, Reben’s AI does a more respectable job of identifying patterns in TED Talks than we might pick up on in a single viewing. Let’s look closer at some of the more subtle but repetitive and formulaic elements of a TED Talk that it was able to surface.

  • Start with a shocking claim: “Five Dollars can Save Planet Earth.”

  • Establish personal expertise and credibility: “I’m an ornithologist.”

  • Strike fear into the heart of the audience: “Humans are the weapons of mass destruction.”

  • Offer a solution based on novel technology: “A computer for calibrating the degree of inequality in society.”

  • Make a grandiose and unfounded proclamation: “Galaxies are formed by the repulsive push of a tennis court.”

  • Use statistics and charts to back it up: “Please observe the chart of a failed chicken coop.”

  • Give a simplified and condescending example for the “everyman”: “It’s just like this, radical ideas may be hard for everyone to register in their pants. Generally. And there’s a dolphin.”

  • Talk about the importance of your idea moving forward: “This is an excellent business because I actually earn one thousand times the amount of life.”

  • Describe potentially discouraging setbacks and roadblocks: “Let’s look at what people eat. That’s not very good food.”

  • Give promising evidence these barriers can be overcome: “We will draw the mechanics and science into the circuitry with patterns!”

  • Provide the audience an important takeaway question: “What brain area is considered when you don’t have access to certain kinds of words?”

  • End with a pithy and folksy conclusion: “Sometimes I think we need to take a seat. Thank you.”

Don’t get me wrong. I’m a TED Talk addict. In fact, I use many of the devices and formulas found in TED Talks in my own writing. But following formulas does not make me a great writer any more than learning the Macarena makes me a great dancer. Reben’s talk gives us a cautionary tale: formula is the enemy of great public speaking, not the recipe for it.

Reben’s performance embodies this by delivering a presentation that is literally composed of every TED Talk ever given, yet stylistically breaks from all known TED Talk formulas through its machine-written script and its delivery through a robotic mask. Reben is not giving a TED Talk to give a TED Talk; he is co-opting the TED stage and TED brand as cultural material for creating his own art.

The results are hilarious, but there is also a double-edged realization that there is something fundamental about great public speaking that cannot be reduced into easy-to-learn formulas and thus replicated by AI. You may become a better speaker with practice, but it is human creativity and charisma that make a great presentation (even if many of us wish it were more formulaic and therefore attainable).

Reben’s TED Talk is only the latest in a long line of compelling works where he trains machines to perform human-ish activities that are ultimately void of real meaning or substance. His machines are like small children going through the motions of baby talk, making noises and gestures without quite understanding what they mean yet, and it is both haunting and beautiful to watch.

We see this duality in Reben’s Synthetic Penmanship project, for which he fed thousands of handwriting samples into a model and then had a pen-wielding robot try to produce language. The model knows nothing about language, but you can see it trying to create shapes that look like the characters of written English.

[GIF: the Synthetic Penmanship robot writing its pseudo-language]

As Reben describes it:

All it knows is shapes. Here it tries to make this pseudo language thing, which is endlessly fascinating to me. It’s a computer trying to be human, but it doesn’t know anything about humanity or why it’s making these shapes. It just becomes so beautiful. The action of the robot doing it with these real pens is just this meditative human/machine spiritual connection.


My favorite work by Reben extends this idea of human-esque communication by training AIs on celebrities’ voices, and in some cases, adding them to video. Reben fed celebrity voices into an algorithm, training it to produce sounds like those celebrities’ voices, but, as with his Synthetic Penmanship project, the output contained no actual English.

The celebrities include John Cleese, George W. Bush, Stephen Colbert, Barack Obama, and Bob Ross. According to Reben, Ross in particular had a very “familiar deep tone to his voice.” He found it extremely interesting how the machine could mimic the soul of someone’s voice without understanding language whatsoever. “I could have fed it in the sounds of train horns and it would have done the same thing,” said Reben, but here it did it with language.

Around that same time, Reben had been investigating the Deep Dream algorithm, which people had been applying to photos and videos. It dawned on him that combining his AI speech patterns from celebrities with Deep Dream video could be pretty interesting. He chose Bob Ross because the act of someone painting was itself a creative process. Not only that, but as you are painting, you are building up an image, and that is something the Deep Dream code can really mess with. “As you add more and more shape to this image, the things it was seeing would get more and more complicated,” said Reben.

When I first saw Reben’s Deeply Artificial Trees go viral last year, it immediately hit me that Reben was collaborating with algorithms to produce an homage to the most prolific but oft-misunderstood algorithm artist of all time. The high-art world shuns paintings by Bob Ross and his students because they are formulaic by design. Ross sought to make it easy enough for anyone to feel like a painter, and did so by breaking painting down into simple steps. Like Reben’s AI TED Talk and Synthetic Penmanship project, Ross’s paintings have the formula of painting down, but they lack the depth of thought and exploration we seek in works by master painters. They are in some senses robotic.

Ross is rejected by the traditional art world as irrelevant because if his paintings can be made so simply that anyone can produce them, then they are no longer special. This same line of thinking is of course important when considering generative or algorithmic art made in collaboration with AI: if a computer can produce a work in a certain style, nothing stops it from producing thousands or millions of similar works in short order.

For example, when I first saw a Deep Dream image, I was in love. I thought this algorithm rivaled the work of great surrealists like Salvador Dali. But then I discovered an app where I could easily make my own Deep Dream image from any photograph. At first it was intoxicating, and I made like twelve in the first few hours. But once I realized how easy it was to achieve the Deep Dream effect and how everyone now had access to it (it was all over Instagram), it lost its novelty.

Reben points out that he was exploring Deep Dream pre-appification of the algorithm, but also maintains a healthy attitude towards it.

Back then there weren’t as many tools out there as there are now to use Deep Dream. It really was a tool kit that was extremely interesting to me, like a new type of paintbrush. And yeah, the first time paint was invented, if you did anything with paint it was probably pretty amazing, but after a while, paint just became a tool like any other. Deep Dream is now available as a push-button filter, it is just a tool like any other and you can use it to do boring stuff or interesting things. The first few people who use any tool will get a lot of attention because it is so novel and new, but after that first wave, you have to do something really interesting with that tool set to move forward.

The brilliance of Reben’s Deeply Artificial Trees is that it really is a human/machine collaboration rather than just the machine itself doing its thing. Reben creates the audio and curates the imagery, resulting in something artful and engaging (as evidenced by its popularity). The magic for me here is that Reben combines two formulas for producing mass-produced kitsch imagery, Deep Dream and Bob Ross, and somehow creates a unique and compelling artwork from them. It is Reben’s human contributions to framing, curating, and editing the work, and not the execution of Ross’s or Deep Dream’s algorithms, that makes this a relevant and timeless work of art. As with Reben’s TED Talk, Deeply Artificial Trees highlights the idea that human creativity is irreplaceable, essential, and currently undervalued as we slowly march into a world of increased automation. As Reben describes it:

AI is really a tool that a human is using rather than the machine itself having all the creative power to it. My curation of the material Deep Dream is applied to, my curation of the voice, my curation of the edit — all those things were human, that was my side. Then there is a computer side, as well. Everyone is worried that AI is going to replace humans, but I think we are really going to see a human/computer collaborative future.

While I see several clear threads running through all of Reben’s work, I was eager to capture more of his thoughts, so I asked Reben to help me summarize the ideas, themes, and explorations behind his projects, starting with the TED Talk:

It has a structure of a pattern, the same way that the Stephen Colbert sounds like him but has no meaning, the handwriting makes English that has no meaning. This thing made a TED Talk that has no meaning. It has the structure of what it is. Our brains are pattern-recognizing machines. When there are patterns, that is soothing to our brains, but when there’s no content, it creates a dichotomy. I think that is what pulls people in and then scares them a bit. It’s a little bit grotesque.

I followed up by asking Reben if there are certain things that will always be in the domain of human creativity, like the charisma required to give a great TED Talk.

The real question is can you find a data set of “charisma.” What does that look like? It may not even be a tech problem so much as it is a training problem of a model. With a good enough data set, who knows what you could do, especially given how fast this stuff is moving. I think we’ll probably get good enough at some point to fake a lot of that, is my guess. So then maybe the more philosophical question is, is it better to have a human to do it than a machine if you can’t tell the difference? Is there inherent value of humanity in creativity versus something that’s just algorithmically made? If there is a distinction, that is a very human distinction; it’s not a very practical distinction. Meaning, if you can’t tell the difference, then it doesn’t actually matter.

An interesting thought experiment I came up with when I was talking to a philosopher was, “We can make a GAN to generate, say, comedy. Can we make a GAN to generate a new genre that we’ve never thought about before?” If you zoom out, “Can AI invent, say, a new academic topic that our brains as humans have never thought of before?” Keep zooming out and ponder if AI can create a new way of thinking. So if we don’t make an AI that could be, say, charismatic… maybe an AI will come up with something that we can’t actually understand which is a description of the world that is as complicated as charisma but is something that is completely different and unique to a computer. But yeah - a lot of stuff right now comes down to good data sets too.

What I love about Reben’s response here is that it highlights a quality all of my favorite AI artists share: a sober, realistic appraisal of current AI capabilities that does nothing to dampen their imagination for a near-limitless future for AI.

In addition to his initial curated AI TED Talk, Reben is working on 24-hour autonomous TED Talks that will use text-to-speech. Be sure to keep an eye out for them. Other new works from Reben that I encourage you to check out include his AI Misfortunes (some great examples shown below). These fortunes are made by an AI trained on fortune cookies, which Reben describes as producing a type of “artificial philosophy.” He then curates phrases from the AI and chooses typography and colors to amplify and emphasize the message before producing physical posters as the final step of the collaboration.

AI Fortunes - Alexander Reben

Also check out this sneak peek of Reben’s balletic three-hour piece of people training AI by showing their webcams their shoes - the ultimate in human/machine collaboration. The footage is really 3D-scan data, which is why everything is rotating, but Reben took the color images and reassembled them into videos. “It struck me as a type of performance people were doing to train an AI,” shared Reben. “I did not quite get it at first, but an hour later I found myself hypnotized.”

Reben is represented by the Charlie James Gallery in LA and has several shows coming up in 2019. If you are near any of these venues, I highly encourage you to check out his new work in person and meet Alexander if you have the chance.

  • Vienna Biennale, Vienna, Austria

  • stARTup Art Fair Special Project, Los Angeles, CA

  • V&A Museum of Design, Dundee, Scotland

  • MAK Museum, Vienna, Austria   

  • Museo San Telmo, San Sebastián, Spain

  • MAAT, Lisbon, Portugal

  • MAX Festival, San Francisco, CA

  • Boston Cyberarts Gallery, Boston, MA

You can learn more about Reben’s latest works at areben.com. And as always, if you have questions, feedback, or ideas for articles for Artnome, you can always reach me at jason@artnome.com.


Inventing The Future Of Art Analytics

November 12, 2018 Jason Bailey
Heatmap, Artnome, 2018

This week, Christie’s will be auctioning one of the most important collections of American art, the Barney A. Ebsworth collection. The collection is valued at $300M and is brimming with work by artists like Georgia O’Keeffe and Edward Hopper, whom most of us feel like we know pretty well. But what do most of us really know about these artists from a quantitative perspective? The answer is not very much.

In this article on inventing new art analytics, we:

  • Outline a new approach to descriptive art analytics using Artnome’s database of artists’ complete works.

  • Chart a never-before-seen view of Georgia O’Keeffe’s full body of work.

  • Share Artnome’s data scientist Kyle Waters’ early approach to predictive analytics using a random forest machine learning model.

  • Make predictions on four works from the Ebsworth collection going to auction this week.

  • Share artists’ price history, performance, and comps provided by our good friends at MutualArt.

Descriptive Art Analytics

Traditionally, art analytics are derived exclusively from auction databases, but only a fraction of an artist’s complete works ever make it to auction. What about the rest of the works? Should we pretend that the majority of works by an artist don’t exist when doing art analytics simply because it is convenient? I don’t think so.

In art, scarcity and uniqueness of a work drive much of its value. But without a database covering many artists’ complete works, neither scarcity nor uniqueness can be calculated. Most experts would be hard pressed to tell you how many oil paintings a popular artist like Georgia O’Keeffe made, fewer could tell you if that number is high or low compared to other artists, and none could tell you the average size of her paintings.


Over the last three years, Artnome has spent thousands of hours and tens of thousands of dollars building the world’s largest database of complete works by blue chip artists to help answer these (and other) questions. In this article, we pull from that database to provide descriptive statistics that paint a macro view of the artists’ complete works. This “big picture” in turn allows us to quantify what makes any singular work “unique” or “scarce” relative to the full body of work.

For example, Georgia O’Keeffe has 2,076 works listed in her official catalogue raisonné. The graph below gives a breakdown of O’Keeffe’s complete works by primary media. It then breaks down the 810 oil paintings by substrate. We then look at all the oil paintings by whether they are listed in public or private collections. We can speculate that works by artists with a low percentage of privately held works are less likely to come to auction and are therefore scarcer.

[Interactive chart: O’Keeffe’s complete works by medium, substrate, and public vs. private collection]

Below we show O’Keeffe’s complete oil paintings by width and height. We then further break it down by showing average surface area and total surface area painted for each year she was active. Contextually, an otherwise small work may be large for its year and vice versa. Not only is it interesting to see how the artist’s working habits evolved over time, but our model also shows how dimensions correlate to sale price at auction.

[Interactive chart: O’Keeffe’s oil paintings by width and height, with average and total surface area per year]
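If you want to reproduce these kinds of breakdowns, a few lines of pandas will do it. The sketch below is illustrative only: the CSV and column names are assumptions standing in for the actual Artnome schema.

```python
# Sketch of the descriptive breakdowns above (illustrative schema).
import pandas as pd

works = pd.read_csv("okeeffe_catalogue.csv")  # hypothetical: one row per catalogued work

# Complete works by primary medium (oil, watercolor, pastel, etc.).
by_medium = works.groupby("primary_medium").size().sort_values(ascending=False)

# Oil paintings only: surface area per work, averaged and summed by year.
oils = works[works["primary_medium"] == "oil"].copy()
oils["surface_area_cm2"] = oils["width_cm"] * oils["height_cm"]
area_by_year = oils.groupby("year_created")["surface_area_cm2"].agg(["mean", "sum"])

print(by_medium)
print(area_by_year)
```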

Why collect all this data? We believe the “Holy Grail” of art analytics will stem from a database of complete works enriched with auction data. We see the potential to harvest meaningful data sets from the images themselves and have already started experimenting towards that end using off-the-shelf solutions for identifying and searching objects shown within paintings.

Predictive Art Analytics

For our first attempt at predictive art analytics, Artnome data scientist Kyle Waters trained a random forest to make predictions on pricing using data across several dozen artists. The random forest is a popular machine learning model that builds many decision trees and makes predictions by averaging the predictions of its component trees; in general, it is more accurate than any single decision tree. The model learns basic relationships from training data and then predicts new outcomes.

Random Forest Simplified
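To make the mechanics concrete, here is a minimal sketch of this kind of model using scikit-learn. The file and feature names are illustrative assumptions, not our production pipeline.

```python
# Minimal random forest price model with scikit-learn (illustrative only).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

records = pd.read_csv("auction_records.csv")  # hypothetical: one row per past sale
features = ["width_cm", "height_cm", "year_created", "artist_id", "medium_id"]
X, y = records[features], records["hammer_price_usd"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# An ensemble of decision trees; the forest's prediction is the average
# of its component trees' predictions.
model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

print("Mean absolute error:", mean_absolute_error(y_test, model.predict(X_test)))
```

Averaging hundreds of trees smooths out the quirks of any single tree, which is why the forest generally beats one tree on held-out data.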

Machine learning is an exciting new tool that could help improve art analytics. However, what many people fail to realize is that a machine learning model is only as good as the quantity and quality of the data it is trained on. We believe this gives Artnome an advantage. Because our database covers complete works and not just those that have gone to auction, we have a larger data set from which to train the model.

Again, because we have the complete works in our database, we can also create estimates for all of an artist’s work, not just those that happen to be at auction at any given moment. For this reason, we like to think of ourselves as the “Zillow of blue chip art.”

Our pricing model is admittedly in its early stages and has lots of room for improvement. For example, our current model performs poorly on the works that typically sell for the most (often the ones that are also getting the most public attention).

Chop Suey, Edward Hopper, 1929

Works from the Ebsworth auction like Hopper’s Chop Suey and Pollock’s Composition with Red Strokes are masterpieces. This means they carry “masterpiece” price tags. Both works are estimated to sell for roughly 10x their artists’ average sale prices at auction (since 2000). Like the largest mansions on Zillow, these masterpieces are the hardest prices to predict using historical data because there simply aren’t that many of them. Additionally, there are a limited number of buyers who can afford them, which makes it that much more difficult to predict a hammer price at auction.

Human experts also struggle with predicting prices for top works. Just this week, Van Gogh’s Coin de jardin avec papillons failed to sell at $30M against pre-auction estimates of around $40M. As Christie’s CEO Guillaume Cerutti shared with The Wall Street Journal’s Kelly Crow, “The air is just thin at that price.”

Coin de jardin avec papillons, Vincent Van Gogh, 1887

Though we struggled with the most expensive works, as you will see, our model did a very respectable job of predicting prices for works estimated at $3M or less, a range where we have a high volume of relevant data. Our predictions for works in this range are so close that they may actually seem boring at first: our model essentially came up with the same estimates as the experts at Christie’s. We are thrilled at these early signs that we could potentially automate pricing estimates at scale for an artist’s complete works using a machine learning model.

Selected Artworks for Analysis

We selected four works from the Ebsworth collection going to auction this week at Christie’s for analysis based on strength and availability of data from our database.

  • Horn and Feather - Georgia O’Keeffe, 1937

  • Cottages at North Truro - Edward Hopper, 1938

  • My-Hell Raising Sea - John Marin, 1941

  • Long Island - Arthur Dove, 1940

We compare Christie’s estimates to our estimates from the Artnome prediction model for each of the above works. You can see the correlation between variables in our model in the heat map below. (You may recognize this heat map as the feature image for this article. I thought it looked like a rather nice modernist painting, so I stripped off the annotations and repurposed it as art.)

[Correlation heatmap of the variables in the Artnome pricing model]
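For those curious, a heatmap like this takes only a few lines to produce once you have a features table. The sketch below uses pandas and seaborn; the column names are illustrative stand-ins for our model’s variables.

```python
# Sketch: correlation heatmap of model variables (illustrative columns).
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

records = pd.read_csv("auction_records.csv")  # hypothetical features table
cols = ["width_cm", "height_cm", "year_created", "hammer_price_usd"]
corr = records[cols].corr()  # pairwise Pearson correlations

sns.heatmap(corr, annot=True, cmap="viridis")
plt.title("Correlation between model variables")
plt.tight_layout()
plt.show()
```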

We have also partnered with our good friends at MutualArt who offer access to auction prices and data on over 300,000 artists as part of their services. MutualArt’s insight analyst Kate Todd generously prepared pricing trends for the artists we cover in this article, as well as comps for the individual works we will be analyzing from the Ebsworth auction.

Georgia O’Keeffe - Horn and Feather

Horn and Feather, Georgia O’Keeffe, 1937

Georgia O'Keeffe (1887-1986)
Horn and Feather 
oil on canvas
9 x 14 in. (22.9 x 35.6 cm.)
Painted in 1937

Christie’s Low/High Estimate: $700,000 - $800,000
Artnome Model Estimate: $720,000

[Interactive chart: average lot value for works by Georgia O’Keeffe. Note the spike in 2014, when her Jimson Weed/White Flower No. 1 sold for $44.4M at Sotheby’s.]

Jimson Weed/White Flower No. 1, Georgia O’Keeffe, 1932

O’Keeffe’s Horn and Feather is a lovely work, but few would mistake it for a masterpiece like Jimson Weed/White Flower No. 1. The estimates from Christie’s (as well as the Artnome estimate) reflect this. In fact, as a lifelong O’Keeffe fan, I’m not sure I would be able to identify Horn and Feather as O’Keeffe’s work out of context. It lacks the magnified, heavily cropped composition that is O’Keeffe’s signature treatment of small objects. Instead, the two objects float in a relatively passive sea of negative white space.

Our friends at MutualArt provided us with a great comp for Horn and Feather. Shell (Shell IV, The Shell, Shell I), painted in 1937 (the same year as Horn and Feather), sold at Sotheby’s last year for $1,515,000, 78% above its estimate.

Shell (Shell IV, The Shell, Shell I), Georgia O’Keeffe, 1937

Though it shares a similar subdued color palette, I think Shell is the superior work, as it exhibits the cropping and use of negative space we expect from a work by O’Keeffe. While this is a subjective observation on my part, Artnome believes these types of observations are also quantifiable, and we are working toward that end.

If you are a regular Artnome reader, then you know that I believe all paintings by female artists are currently undervalued (research shows by as much as 47%) and worth investing in. As a data point, O’Keeffe (who may be the best-known female painter of all time) has an average lot value of $2,340,715 for paintings, far below that of Edward Hopper, her male contemporary, whose average lot value is $8,963,652 (2000-present). For this reason, I always root for O’Keeffe and other female artists to outperform their estimates, and I will be rooting for Horn and Feather.

Edward Hopper - Cottages at North Truro

Cottages at North Truro, Edward Hopper, 1938

Edward Hopper (1882-1967)
Cottages at North Truro 
signed 'Edward Hopper' (lower right)
watercolor and pencil on paper
20 x 28 in. (50.8 x 71.1 cm.)
Executed in 1938.

Christie’s Low/High Estimate: $2,000,000 - $2,500,000
Artnome Model Estimate: $2,220,834

[Interactive chart: average lot value for works by Edward Hopper]

As a painter trained in both watercolor and oils, I see Hopper as every bit as accomplished a watercolorist as he is a painter in oils. So while works on paper generally fetch less than oils on canvas, I would not be at all surprised if Hopper’s Cottages at North Truro achieved the $2,220,834 estimate from our machine learning model.

While Hopper’s works on paper average just $318,554 at auction (since 2000), superior works can sell for much higher sums. In 2001, Charleston Slum, a Hopper watercolor on paper, sold for $1,876,000 at Christie’s on an estimate of $500,000 - $700,000.  

Charleston Slum, Edward Hopper, 1929

Our friends at MutualArt provided two additional comps below, both of which brought in significantly less at auction than Charleston Slum and less than our estimate for Cottages at North Truro.

Vermont Sugar House, Edward Hopper, 1938

Shacks at Pamet Head, Edward Hopper, 1937

Hopper’s Vermont Sugar House sold at Christie’s in 2007 for $881,000 and Shacks at Pamet Head sold at Sotheby’s in 2004 for $702,400. Both works exceeded estimates of $500,000 - $700,000. It will be interesting to see if Cottages at North Truro can rally past these prices to meet our estimate.

John Marin - My-Hell Raising Sea

My-Hell Raising Sea, John Marin, 1941

John Marin (1870-1953)
My-Hell Raising Sea
signed and dated 'Marin 41' (lower right)--inscribed with title (on the reverse)
oil on canvas
25 x 30 in. (63.5 x 76.2 cm.)
Painted in 1941.

Christie’s Low/High Estimate: $250,000 - $350,000
Artnome Model Estimate: $803,372

[Interactive chart: average lot value for works by John Marin]

And finally, a prediction from our model outside of the range of Christie’s own estimates. Our model likes this painting. Even though the trends suggest that the market for Marin may be headed downward, our estimate has it at $803,372, over twice Christie’s middle estimate of $300,000.

I don’t have access to a condition report for My-Hell Raising Sea, but it does look like there may be a crease of some sort on the right side of the canvas. Condition is likely the most important variable missing from our model, and we are actively seeking solutions to resolve this moving forward.

Marin was among the first American artists to paint abstracts and is a bridge between figurative painters and the abstract expressionists. For that reason, works that highlight his tendency toward the abstract like Sailboat, Brooklyn Bridge, New York Skyline (which sold for $1,248,000 in 2005) have done well.

Sailboat, Brooklyn Bridge, New York Skyline, John Marin, 1934

As a lifetime New Englander who is happiest on the northern shores of Maine, I strongly prefer Marin’s seascapes - he captures that coastline as well as any painter, Winslow Homer included. But our model does not care about my fondness for the Maine seacoast.

The comps from MutualArt suggest that Christie’s experts have it right on this one and that our model may be too high. But we are of course standing by the estimate from the model.

Two Sloops on a Squally Sea, John Marin, 1939

Our first comp, Marin’s Two Sloops on a Squally Sea, sold at Sotheby’s in 2016 for $212,500. While it exceeded its own estimate of $120,000 - $180,000, it fell well short of our $803,372 estimate for My-Hell Raising Sea.

Cape Split, Maine, John Marin, 1945

And our second comp, Cape Split, Maine, had an estimate of $400,000 - $600,000, but failed to find a buyer at auction just a year ago at Sotheby’s.

Arthur Dove - Long Island

Arthur G. Dove (1880-1946)
Long Island
signed 'Dove' (lower center)
oil on canvas
20 x 32 in. (50.8 x 81.3 cm.)
Painted in 1940.

Christie’s Low/High Estimate: $1,000,000 - $1,500,000
Artnome Model Estimate: $2,801,572

Like Marin, Arthur Dove is among the earliest American abstract painters. His works are simpler abstractions, and I mean that as a compliment. They have an organic feel that is supported by the use of an earthy palette.

Long Island is not a particularly sexy painting, even for Dove, but it has grown on me. It is unmistakably Dove in its pared-down composition, which features a nice balance of the sun (or moon) dwarfed by two massive monolithic forms resting on wave-like dunes. Our prediction model likes the piece more than I do, pricing it at $2,801,572, well above Christie’s estimate.

I personally much prefer the comp sent to us from MutualArt, Dove’s 1941 Lattice and Awning.

Lattice and Awning, Arthur Dove, 1941

Lattice and Awning last sold for $1,685,000 against an estimate of $1,200,000 - $1,800,000 in 2013 at Sotheby’s. I believe it to be a stronger composition than Long Island, but my data suggests Dove may not have made that many paintings compared to the other artists in my database, with just 459 works listed in his catalogue raisonné. If this is the case (I think an expanded catalogue is in the works), it may drive up the desirability of Long Island.

Moving Forward

In the future, we plan on leveraging deep learning to harvest data from the images themselves. We believe using the image data is useful for predictive accuracy because we can then detect things like color, subject matter, artistic style, composition, etc. These are all variables that people clearly visualize and use to determine the price of an artwork, but that have not been quantifiable in a scalable and manageable way. Until now.
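As a rough illustration of what that could look like (an assumed approach, not our production system), a pretrained convolutional network can turn each painting image into a fixed-length feature vector that a pricing model can consume. The sketch below uses a recent version of torchvision; painting.jpg is a hypothetical file.

```python
# Sketch: image features from a pretrained ResNet-50 (assumed approach,
# not the actual Artnome pipeline). Requires a recent torchvision.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head, keep the embedding
backbone.eval()

image = Image.open("painting.jpg").convert("RGB")  # hypothetical file
with torch.no_grad():
    embedding = backbone(preprocess(image).unsqueeze(0)).squeeze(0)

print(embedding.shape)  # a 2048-dimensional vector per painting
```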

We also have some early thoughts on how to improve detection and prediction of masterpieces as outliers in our model. One idea is to include data on exhibitions from the top museums and galleries. If we have data showing that several works from a single exhibition or combination of exhibitions have led to a dramatic increase in sale price, other works that were in those same exhibitions may also receive a bump from the model. Recent research suggests the number of institutions that are influential in establishing the value of art and artists is relatively small, so this may be a fairly manageable undertaking. What I like about this approach is that we are essentially factoring in the good judgment of the best curators of the last 100 years in a quantifiable way for our model.
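In data terms, this boils down to a simple feature join. Here is a sketch with hypothetical tables and an illustrative list of institutions:

```python
# Sketch of the proposed exhibition feature (hypothetical tables).
import pandas as pd

records = pd.read_csv("auction_records.csv")     # assumed: work_id plus features
shows = pd.read_csv("exhibition_history.csv")    # assumed: work_id, institution
influential = {"MoMA", "Whitney Museum of American Art"}  # illustrative list

# Count each work's appearances in shows at influential institutions.
bump = (shows[shows["institution"].isin(influential)]
        .groupby("work_id")
        .size()
        .rename("influential_show_count")
        .reset_index())

# Join the count onto the training features; works with no such shows get 0.
records = records.merge(bump, on="work_id", how="left")
records["influential_show_count"] = records["influential_show_count"].fillna(0)
```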

Summary and Conclusion

In this article, we looked at new descriptive analytics driven by Artnome’s complete works database that gave us a unique view of O’Keeffe’s complete works. We then used our data to explore predictive analytics around auction prices for several pieces from the Ebsworth collection that will be going to auction this week. We also provided further context with performance history and comps thanks to our friends at MutualArt.

We think there is a ton of low-hanging fruit when it comes to applying modern analytical tools and practice to art and the art market. In addition to building better prediction models, improving available data on art and artists helps us understand these works in a new light and provides a much-needed barrier against forgery.

At Artnome, we are looking to onboard three to five clients in the next few months who are interested in benefiting from early access to our prediction model and the insights from our unique database. We would ideally like to develop a long-term relationship with a few key clients as we grow the strength of both our one-of-a-kind database and our machine-learning-driven models. I can be reached at jason@artnome.com.

For those looking for a more mature solution, MutualArt offers both full advisory services including authentication as well as self-serve tools for auction data and analysis. While there are dozens of data and analytics providers to choose from, we like Zohar and his team at MutualArt because they share our vision of data and machine learning leading to better analytics and a stronger art market.


