Saturday, August 31, 2024

AI, Art, and Some Musings

I saw this post the other day discussing ways to prompt Midjourney (which I guess is one of those AI art generators?) to make more creative or distinct art, and it's been sitting in the back of my head a bit.

And then I stumbled across this post, in which a guy created a site called One Million Checkboxes, which I guess people were using to create art by checking and unchecking boxes?

Anyways, he discovered some young programmers who were doing some interesting things with his site, and I suppose the connecting feature of these two disparate posts is that they both buck the conventional wisdom a bit.

Or rather, I have seen quite a lot of criticism and concern about using AI to generate art, but the post on how to adjust Midjourney seemed less like 'the AI does what a real person can do (poorly), and will destroy the ability of artists to make a living' and more like 'here's another way people can create things, but it still requires human involvement in choosing the right prompts and settings to get something you want.' Really, the post made it seem like this is just another way to create art. One that doesn't require good hand-eye coordination or the ability to draw in fine detail, but one that is still distinct and different from other forms of art.

One Million Checkboxes was similar, in that, of course, the average user may have been annoyed and frustrated that someone was taking over the area where they were trying to create some sort of design...

And again, there was the concern about bots (as the kids apparently programmed bots to do what they did). But the guy who built the site was just impressed as heck with what the kids pulled off. 

Which, tbf, does sound impressive.

So a shift in perspective, in both cases.

Though it doesn't mean that the original perspective is wrong, per se. 

Which got me thinking about art, and AI (or really machine learning. It's not what I would call AI, even if everyone else does), and what sort of guidelines I'd use for it.

See... I think computing/machine learning/etc can be complementary to human ingenuity, but you can't really let them run without human involvement.

Mostly because computers and programming and machine learning are very much a 'garbage in, garbage out' kind of process.

Computers are absolutely excellent at doing things we frail mortals are terrible at.

Like doing the exact same thing, one million times, in the exact same way. In two seconds or less.

Computers will follow step-by-step instructions, exactly the way they are programmed to.

If you want to make a calculation, then as long as you program it correctly, the program will run it perfectly every time. (Ignoring, if you will, that apparently cosmic rays can sometimes flip bits, and I'm sure there are other things that can introduce corruption. If you were able to do billions of calculations before some corruption happened, it's not significant enough to derail this discussion.)
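Just to make that concrete, here's a throwaway Python sketch (the calculation itself is arbitrary, just something cheap enough to repeat a million times):

    # Run the exact same calculation a million times and collect the results.
    results = set()
    for _ in range(1_000_000):
        results.add(sum(n * n for n in range(100)))

    # One unique value: the machine never got bored or fat-fingered a digit.
    print(results)  # {328350}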

And I love them for that, because I would absolutely hate having to do that myself. I, a human, if asked to perform the task a million times, will absolutely screw some of them up.

Maybe I was distracted by something, or had a bad day, or whatever... I would miss a step, or make a mistake in multiplication, or whatever.

On the flip side... the computer only does what it's programmed to do. It has no way of telling whether it's wrong. It's not going to run a calculation and say 'Oh, that can't be right. I was expecting something in the vicinity of 300 billion and this is off by ten orders of magnitude. Maybe I made a mistake.'
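If you want that kind of sanity check, a human has to write it in. Something like this little sketch, where the 'expected ballpark' is entirely a person's guess (all the numbers here are made up for the example):

    def run_calculation():
        # Stand-in for the real computation; imagine it has a bug somewhere.
        return 3.2e21

    # A human's ballpark estimate ("somewhere around 300 billion").
    # The program can't derive this; someone who understands the
    # problem has to supply it.
    EXPECTED = 3e11

    result = run_calculation()
    if not (EXPECTED / 10 <= result <= EXPECTED * 10):
        raise ValueError(f"Result {result:.3g} is orders of magnitude off; check your inputs.")

The computer still isn't 'noticing' anything here, of course. It's just mechanically running the doubt a person bothered to encode.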

Your machine learning isn't going to say 'this must be junk data, and it's corrupting my results.'

Or maybe it can, if someone builds a dataset to train it on and comes up with some sort of criteria for assessing good and bad data.

Which is kind of the point.

You still need someone to do that. To figure out what datasets you need to use to train it on. To figure out what the criteria for a good result is. 
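To sketch what that looks like, here's some made-up Python where every threshold is a human judgment call (the data and cutoffs are invented for the example; real ones come from actually knowing your domain):

    raw_rows = [
        {"value": 42},
        {"value": None},       # missing reading
        {"value": -7},         # negative where that's impossible
        {"value": 5_000_000},  # almost certainly a sensor glitch
    ]

    def looks_like_junk(row):
        # Human-chosen criteria; the model can't invent these on its own.
        return (
            row["value"] is None
            or row["value"] < 0
            or row["value"] > 1_000_000
        )

    clean = [row for row in raw_rows if not looks_like_junk(row)]
    print(clean)  # [{'value': 42}]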

You need human involvement.

Which is something I've thought for a long time, actually.

In tech, well... given enough time and resources we can automate quite a bit. We are, in fact, overwhelmed with tools all meant to automate things and make our jobs easier... and we write our own scripts to do the same.

I had someone ask me once if I was worried about automation taking over my job.

And no, no I am not.

Because all those tools? All those things designed to make our jobs easier?

They still need human involvement.

The app changes, or they upgrade a server and the tool has to be updated to point at the new one. Or the requirements change. Or the app is now serving more and more customers and needs to be scaled up to manage the workload.

All of these sorts of things require people who understand the system well enough to know how to update and modify it.

And... in some ways, all these tools really just add another layer of obfuscation. 

It's all great when it works, but the minute something doesn't? The minute you need someone to figure out why it's not working?

You need an actual, living person who understands the tool well enough to figure it out.

I am reminded, again, of what my father said a long time ago. 

That you will either use what you learn so much you memorize it, or not use it again and forget... and the important things are knowing how to think and knowing how to look things up.

To use a completely different metaphor to say the same thing - we don't need people who know how to start a car and drive. (Those are the users, for whom we build the automation.)

We need people who know how to track down the source of the strange sound when we brake, or figure out why the car isn't starting when we turn the key.

Mechanics are not needed any less, no matter how much auto technology has changed... and, in fact, the more complex and sophisticated the automobile, the more important mechanics are when things go wrong.

I wound up focusing more on the automation part and less on the 'art' part, but I do think humanity and computing are complementary.

We decide what we want to create, what we want to see, what we consider good and bad... and then we build the tools and automation that can be used to help us do the actual work.



Sunday, August 11, 2024

Work Performance and the Military

Saw this interesting post that took off when some manager tried telling an employee that people who rig parachutes had to give 100% or people would die, and it sparked a discussion on what exactly the work requirements are for riggers. 

I read it mostly because I remember having a lot of riggers in my class at Airborne school, and I already knew that they had to be prepared to jump with any of the parachutes they rig. 

The additional points about how they are required to take breaks (since it's known that quality degrades after a certain amount of time), about not wanting them rigging parachutes if they're sick or otherwise compromised, and of course that there's quality control in place so that you're not dependent on one person always getting it perfect are...

Actually pretty basic common sense. 

And yet ... 

Things that are fairly obvious in a job where people die if you get it wrong are somehow not understood or accepted by business leaders today.

Maybe it's the lack of true leadership experience? 

I don't really know the cause, but I am sometimes actually grateful for the things I learned about leadership from my time in the army. 

Somewhat related, I was talking to someone about that a few weeks ago and remembered a few other points I feel are often forgotten. 

Namely that, as a leader, the first question when my people fail at something is 'did I set them up for success?'

Did I give them the training they needed? Were my expectations clear? Did they have the resources to do the job? If it required coordination with another team or department, were there issues I needed to resolve?

Sure, people are diverse and you might have some slackers or troublemakers, but in my experience over 90% of any issues can be resolved if you make sure you set them up for success. 

And if you did all that and they still aren't cutting it? You have the documentation you need to say they're a poor performer. (For that specific job, at least. People have different talents, and some are not suited for the task at hand. It doesn't mean they're lazy or a bad employee; they're just not in the right spot.)

I heard a famous coach say something similar, and I wish I remembered who it was and where.

Anyways, thinking of that reminded me of yet another issue with the 'whale' fallacy that seems to have grown big in tech.

Which is that really, if you think 10% of your people are 'whales' who do 90% of the work, then you need to figure out what you are doing that's blocking the other 90%.

Because there's nothing innately special about your 'whales'.