Okay, so today I’m gonna walk you through this little experiment I did. The title? “who has the most masters wins”. Sounds kinda cryptic, right? Well, lemme break it down.

Basically, I was messing around with some AI stuff, trying to figure out if, like, having a bunch of “master” models working together would beat out just one super powerful model. The idea popped into my head after reading some stuff about ensemble learning – you know, where you combine a bunch of weak learners to get a strong learner. I thought, “Why not try this with the AI models I have access to?”
First, I had to gather my troops. I picked out a few different models that I had API access to. Nothing crazy, just some decent, publicly available ones. I made sure they were diverse enough – some were better at creative tasks, others were good at logical stuff. Tried to get a good mix.
Next up, I needed a challenge. I decided on a complex problem: generating a marketing campaign for a fictional new product. I figured this would test their creative writing, strategic thinking, and overall coherence. Each campaign needed three elements: a product description, a target audience, and a promotional plan.
Then comes the fun part: making them work. I set up a system where each model got the same initial prompt: “Create a marketing campaign for a self-cleaning coffee mug.” Simple enough. They each spit out their own versions of the campaign.
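The fan-out step above is simple to sketch in code. This is just a minimal mock-up of the idea, not my actual setup – the `ask_model_*` functions are hypothetical stand-ins for whatever real API clients you have, and the placeholder campaigns they return would come back from the models:

```python
# Sketch of the fan-out step: send one prompt to several models and
# collect each model's campaign. The model names and ask_model_*()
# functions are hypothetical stand-ins for real API calls.

PROMPT = "Create a marketing campaign for a self-cleaning coffee mug."

def ask_model_a(prompt):
    # stand-in for a real API call; a real model fills these fields in
    return {"description": "...", "audience": "...", "plan": "..."}

def ask_model_b(prompt):
    return {"description": "...", "audience": "...", "plan": "..."}

def ask_model_c(prompt):
    return {"description": "...", "audience": "...", "plan": "..."}

MODELS = {"A": ask_model_a, "B": ask_model_b, "C": ask_model_c}

def fan_out(prompt):
    """Run every model on the same prompt; return {model: campaign}."""
    return {name: ask(prompt) for name, ask in MODELS.items()}

campaigns = fan_out(PROMPT)
```

The key point is just that every model sees the exact same prompt, so the outputs are directly comparable element by element.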
Here’s where it gets interesting. I didn’t just pick the best campaign outright. Instead, I did this weird thing where I sort of “voted” on the best parts. I took each element – the product description, the target audience, the promotional plan – and compared what each model came up with. Then, I picked the “winning” element from each model. So, maybe Model A had the best product description, Model B had the best target audience, and Model C had the best promotional plan.
Then, I stitched it all together. I took those “winning” elements and combined them into a single, Frankenstein’s monster of a marketing campaign.
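The voting-and-stitching steps can be sketched like this. In the real experiment I judged each element by hand, so the `pick_best` criterion here (longest text wins) is just a made-up stand-in, and the sample campaigns are fabricated for illustration:

```python
# Sketch of the element-wise "voting" plus stitching. The campaigns
# are made-up examples, and pick_best's criterion (longest text wins)
# is a hypothetical stand-in for my manual judging.

ELEMENTS = ["description", "audience", "plan"]

campaigns = {
    "A": {"description": "A mug that scrubs itself clean after every sip...",
          "audience": "Busy office workers",
          "plan": "Desk-side demo tour"},
    "B": {"description": "Self-cleaning mug",
          "audience": "Commuters who hate washing up after every coffee",
          "plan": "Social ads"},
    "C": {"description": "Mug",
          "audience": "Students",
          "plan": "A 30-day 'never wash a mug again' influencer challenge"},
}

def pick_best(element, campaigns):
    """Return (model, text) whose version of `element` wins the vote.
    Stand-in criterion: longest text wins."""
    return max(((model, c[element]) for model, c in campaigns.items()),
               key=lambda pair: len(pair[1]))

def stitch(campaigns):
    """Combine the winning element from each model into one campaign."""
    return {el: pick_best(el, campaigns) for el in ELEMENTS}

master = stitch(campaigns)
```

So the stitched `master` campaign can end up with its description from one model, its audience from another, and its plan from a third – exactly the Frankenstein effect described above.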
Finally, time to compare. I showed both the “master” campaign (the Frankenstein one) and the individual campaigns to a group of people (friends, mostly). I asked them which campaign was better, more creative, more effective, the whole nine yards.
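Tallying the preferences is the easy part. This sketch uses made-up votes purely to show the mechanics – it is not the real data from my friends:

```python
# Sketch of the tallying step: each person names the campaign they
# preferred, and we count the votes. The votes list is fabricated
# example data, not the actual results.

from collections import Counter

votes = ["master", "master", "A", "master", "B", "master", "C", "master"]

def tally(votes):
    """Count how often each campaign was preferred."""
    return Counter(votes)

scores = tally(votes)
winner, count = scores.most_common(1)[0]
```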
The results?
Well, surprisingly, the “master” campaign actually did pretty well. It wasn’t a clear knockout, but it consistently scored higher than the individual campaigns. People said it was more well-rounded, more creative, and generally more appealing.

Lessons learned?
- Turns out, having a bunch of specialized models working together can be pretty powerful.
- The “voting” system seems to work better than just picking one model outright.
- This is all still super experimental, but it’s a fun way to think about AI and collaboration.
So yeah, that’s the “who has the most masters wins” experiment. Messing around with this was fun, and the results came out better than I expected. Might try something similar later!