I’m very much not an expert on neural nets, despite playing around with them on occasion, so I asked someone who is: Alex J. Champandard very graciously answered my questions about your question.

I’ve elaborated on his responses, so any mistakes in explaining this are my own. With that in mind, let’s get started:

A neural network is trained on a set of training data. Ideally, you want it to be able to generalize from that set to the larger set of data you actually want it to work with. If you sample a trained neural network, such as an implementation of char-rnn or the Magic: the Gathering card generators, the stuff it creates should already be mostly unique within that sample.
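
To make that concrete, here’s a rough sketch of what “sampling” looks like in code. It’s a toy PyTorch character-level model I wrote for illustration (the real char-rnn is a Torch/Lua project), and the model is left untrained, so it only demonstrates the mechanics: feed the network a character, get a probability distribution over the next character, pick one, repeat. The point is that a well-trained, well-generalized network gives you mostly distinct strings when you draw from it this way.

```python
# Toy character-level sampler (PyTorch), for illustration only; not the
# original char-rnn. The model is untrained; only the mechanics matter here.
import torch
import torch.nn as nn

class CharModel(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, h=None):
        e = self.embed(x)
        out, h = self.rnn(e, h)
        return self.head(out), h

def sample(model, start_idx, length, temperature=1.0):
    """Draw one string of token indices, one character at a time."""
    model.eval()
    idx = torch.tensor([[start_idx]])
    h = None
    out_indices = [start_idx]
    with torch.no_grad():
        for _ in range(length):
            logits, h = model(idx, h)
            probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
            idx = torch.multinomial(probs, num_samples=1)
            out_indices.append(idx.item())
    return out_indices

model = CharModel(vocab_size=64)
draws = {tuple(sample(model, start_idx=0, length=20)) for _ in range(100)}
print(f"{len(draws)} distinct samples out of 100")
```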

When a sample repeats itself, that’s due to over-fitting: the neural network has learned the training data too well and failed to generalize from it. The most common cause is training data that’s too small, so the network essentially memorizes it by rote. Avoiding over-fitting is a frequent training problem.

And if that wasn’t enough, the training can also fail in the other direction, as Alex explains: “Under-fitting means everything is averaged out / mushy / blurred.”
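
A common way to catch both failure modes is to hold some data out of training and compare the loss on it with the loss on the training data. The thresholds in this sketch are made-up numbers, but the rule of thumb behind it is standard: training loss far below validation loss suggests over-fitting, while both staying high suggests under-fitting.

```python
# Rough diagnostic sketch: compare training loss with held-out validation
# loss. The threshold values are arbitrary placeholders for illustration.
def diagnose(train_loss, val_loss, gap_threshold=0.5, high_threshold=2.0):
    if val_loss - train_loss > gap_threshold:
        return "over-fitting: memorized the training data, doesn't generalize"
    if train_loss > high_threshold and val_loss > high_threshold:
        return "under-fitting: everything averaged out / mushy / blurred"
    return "looks reasonable"

print(diagnose(train_loss=0.2, val_loss=1.4))  # over-fitting
print(diagnose(train_loss=2.6, val_loss=2.7))  # under-fitting
print(diagnose(train_loss=1.0, val_loss=1.1))  # looks reasonable
```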

So a well-trained network, by definition, is much less likely to repeat itself. But what if you still want to make absolutely sure that it doesn’t?

For the purposes of your question we can consider the neural network to be a deterministic function: the same input into the trained neural network will produce the same output. (There are stochastic neural networks where the output is partially random, but for your question we can abstract that away by treating the random seed as part of the input.)
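
Here’s a tiny illustration of that point, with numpy’s random generator standing in for the network’s output distribution: once you count the seed as part of the input, the whole process is a deterministic function.

```python
# Stand-in for a stochastic network: numpy's generator plus a fixed output
# distribution. Same seed in, same "sample" out, every time.
import numpy as np

def generate(seed, probs, length=10):
    rng = np.random.default_rng(seed)  # the seed is treated as part of the input
    return rng.choice(len(probs), size=length, p=probs).tolist()

probs = [0.1, 0.2, 0.3, 0.4]  # pretend this came from a trained network
print(generate(seed=42, probs=probs))  # same seed: identical output
print(generate(seed=42, probs=probs))
print(generate(seed=7, probs=probs))   # different seed: (almost certainly) different output
```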

To explicitly avoid prior solutions without changing the seed, you’d need to re-train the neural net based on its prior output. You’d end up with a new neural net that’s trained to avoid the things you told it not to repeat. (This is a little bit like adversarial training.)
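
As a very rough sketch of what that re-training could look like, here’s one possible loss: the network’s usual objective plus a penalty that pushes probability away from sequences it generated before. The penalty term and its weight are my own illustrative choices, loosely in the spirit of “unlikelihood”-style penalties, not a recipe from Alex.

```python
# Sketch of a fine-tuning loss that discourages previously generated output.
# Shapes and the penalty weight are illustrative assumptions.
import torch
import torch.nn.functional as F

def avoid_repeats_loss(logits, targets, banned_targets, penalty_weight=0.5):
    """logits: (batch, seq, vocab); targets: real training tokens;
    banned_targets: tokens the net generated before and should now avoid."""
    vocab = logits.size(-1)
    # Usual objective: keep fitting the training data.
    fit = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    # Extra term: push probability mass away from the prior outputs.
    log_probs = F.log_softmax(logits, dim=-1)
    p_banned = log_probs.gather(-1, banned_targets.unsqueeze(-1)).squeeze(-1).exp()
    avoid = -torch.log1p(-p_banned.clamp(max=1 - 1e-6)).mean()
    return fit + penalty_weight * avoid

# Dummy tensors just to show the loss runs and backpropagates; a real use
# would fine-tune the trained network with this over its own prior samples.
logits = torch.randn(2, 5, 64, requires_grad=True)
targets = torch.randint(0, 64, (2, 5))
banned = torch.randint(0, 64, (2, 5))
loss = avoid_repeats_loss(logits, targets, banned)
loss.backward()
print(loss.item())
```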

That’s one way to avoid repeating your output, but it’s not necessarily the best way. When I asked Alex, he said that “Most neural networks are deterministic so re-training makes sense,” but that:

"NeuralStyle uses a random seed basically, and a separate optimization algorithm that gives you control outside of the network. There are algorithms in image synthesis that are all about reshuffling image elements, but trying to keep them all in the new image. NeuralStyle does this too in a way, it only can reshuffle things on the page.”

So, based on my current understanding, re-training the network might work fine. But if the original network is generalized enough, it’s probably less efficient than just using a different seed. As with most optimization problems, the best solution depends on exactly what you’re trying to accomplish.
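
The “different seed” option is also the easiest to sketch: leave the trained network alone, vary the seed, and keep a record of what you’ve already produced so exact repeats get skipped. The generate() function here is just a stand-in for sampling the real network.

```python
# Vary the seed and reject exact repeats. generate() is a stand-in for
# sampling a real trained network with that seed.
import numpy as np

def generate(seed, length=10, vocab=4):
    rng = np.random.default_rng(seed)
    return tuple(rng.integers(0, vocab, size=length).tolist())

seen = set()
fresh = []
seed = 0
while len(fresh) < 5:
    out = generate(seed)
    seed += 1
    if out in seen:
        continue  # already produced this one; try the next seed
    seen.add(out)
    fresh.append(out)

for s in fresh:
    print(s)
```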

I hope this clarifies your understanding of neural networks. I came across a few other resources while I was researching this, if you’d like to dig further: Awesome Recurrent Neural Networks is a curated list of RNN resources; the M:tG generation community has a tutorial for using the tools they’ve developed to generate new Magic cards; and The Unreasonable Effectiveness of Recurrent Neural Networks is a good introduction to working with recurrent neural networks.