Reteo's Ramblings

Just Stuff that Comes to Mind

First Posts and Artificial Intelligence

[Image: a blank piece of writing paper with a pencil atop it.]
Where do I begin?

First Posts

It’s a well-known fact that your first anything is going to be poor. You can’t help it; there are no expectations to build from, and you have no idea what you are going to do. That said, some people have a topic they want to tackle and are fully prepared to do so. Even then, there’s still the matter of what to write first.

For an “anything and everything” blog like this, I don’t have that preparation, and on top of that, I’m very much not used to sharing my thoughts in general. All the same, I would like to try, and there’s no time like the present to begin.

Artificial Intelligence

There’s been a lot said lately about the use of Artificial Intelligence (AI) in just about every field. People who write, draw, film, and so on have grievances about the way their works have been used to train AIs, and as we speak, legislative bodies are hashing out what constitutes ethical boundaries.

This may be somewhat antagonistic, but I tend to question the legitimacy of “intellectual property.” The idea that “if you thought it, you own it” is rather offensive to me, because it says “if I think of something first, then you are not allowed to think of it afterwards.” That is not how the human mind works, and the practice of forcing people to toe an invisible line is, while not explicitly slavery, something in the general vicinity of coercion. Then again, we’ve seen where the alternatives led, including the Guild System, where practitioners banded together to keep their knowledge secret from the larger population, as well as the isolation of keeping one’s thoughts hidden until they die with their thinker. Not really a way for knowledge to spread. So I can understand why intellectual property is a big deal, even if I find it questionable.

Either way, there’s a reason that copyright is all about the “expression” of a creation, and not its “style.” I doubt Jackson Pollock could sue toddlers for mimicking his work just because he had the “original” idea of splattering paint on a canvas. (Yeah, I’m not really impressed with his work.) And that should be considered: AIs do not just “mimic” an artist’s creations; their outputs are every bit as “inspired” as works made by another human artist. The results are similar in style, but they are not the same expression as the original artist’s.

There is also the debate over originality between those who use AIs and those who refuse to. “AI is simply copying what it’s trained on,” they say. “It is not making anything original.” But consider what happens with art students, who spend their initial semesters in college going over the history of art and learning from the “greats.” They are being trained on the original artists before they ever get to the point of producing work in their own medium. They then use what they learned to create their own pieces.

Additionally, artists the world over use “reference images” to ensure correct proportions, angles, coloring, and positioning in their own artwork. How is that not “copying others’ work”? All in all, people accuse AIs of not being artistic for doing the exact same things that humans do.

“But,” I can hear some say, “the human at least understands what they’re doing. AIs are not capable of this!” To that, I would ask how we know this is the case.

Just a little context: when I consider the human mind, I tend to see a cognitive map, made up of concepts connected together by associations. These concepts are often tied to external stimuli, such as sights, sounds, smells, and sensations, gained through one (or more) of the sensory systems: the eyes, the ears, the skin, and the chemical senses of smell and taste. The mind encodes these things and associates them over time.

I don’t agree that people have some magical sense of “understanding”; to me, it’s an illusion created by the concepts and associations collected in the brain, much the same way that training data produces concepts and associations in a model’s parameters. There is not a lot of difference; certainly not enough to dismiss AI out of hand, especially when it can be useful for improving one’s own projects, whether that’s outlining plans, designing images, or training students.
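To make that analogy a little more concrete, here’s a minimal, purely illustrative sketch in Python (my own toy example, not how any real AI model actually works): a “cognitive map” as a graph of concepts whose associations get stronger every time two concepts are experienced together.

    from collections import defaultdict
    from itertools import combinations


    class ConceptMap:
        """A toy 'cognitive map': concepts linked by weighted associations."""

        def __init__(self):
            # Association strength between pairs of concepts.
            self.associations = defaultdict(float)

        def experience(self, *concepts):
            """Strengthen the link between every pair of concepts seen together."""
            for a, b in combinations(sorted(set(concepts)), 2):
                self.associations[(a, b)] += 1.0

        def related(self, concept, top=3):
            """Return the concepts most strongly associated with `concept`."""
            scores = {}
            for (a, b), weight in self.associations.items():
                if concept == a:
                    scores[b] = scores.get(b, 0.0) + weight
                elif concept == b:
                    scores[a] = scores.get(a, 0.0) + weight
            return sorted(scores, key=scores.get, reverse=True)[:top]


    # "Training" on a few experiences, then recalling associations.
    mind = ConceptMap()
    mind.experience("campfire", "smoke", "warmth")
    mind.experience("campfire", "smoke", "marshmallows")
    mind.experience("rain", "cold", "smoke")

    # 'smoke' comes first; it co-occurred with 'campfire' most often.
    print(mind.related("campfire"))

Real models obviously do something far more sophisticated, with billions of parameters instead of a handful of labeled links, but the basic idea is the same: repeated exposure strengthens associations, and “recall” is just following the strongest links.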

“Train students? What about hallucinations?” Good question. Think about this for a second: how many people do you know who are holding onto false information because they refuse to consider alternatives? How many teachers are doing the best they can but do not have all the information necessary to be correct? Heck, we live in a society that glorifies celebrities, whose entire job is to pretend to be someone else. Perhaps this gives us an opportunity to teach the most important lesson anyone should learn: how to doubt what they’re told, come up with questions and tests for those doubts, and come to their own conclusions.

As far as I’m concerned, in the larger scheme of things, AI and humans may work differently, but their output is not that different.

