– So as Peter mentioned,
I’m the Program Manager for the DHLab, and one of the things that I think a lot about
is how do we help students and faculty who are interested in digital humanities methods get started. Oftentimes computation is a
new dimension to their work. Oftentimes they won’t have experience in programming. They might or might not know what the file system is on their computer. And they all have different timeframes in which they need to complete a project. Depending on what they’re hoping to do, that poses more or less of a problem. If they’re interested in getting
started with text analysis, there are a lot of out-of-the-box tools, no programming required, that often serve as an entry point into more sophisticated methods later. But if they’re interested
in image analysis, it tends to be a little
bit steeper of a curve. The approaches tend to
be really technically and mathematically sophisticated. The algorithms are oftentimes primed for working with thousands or tens of thousands of images, as we’ve seen earlier, and students and faculty
who are coming to me usually aren’t working quite at that scale but they are interested
in these approaches. So how can they actually get started? And as Peter mentioned, one of the reasons this has been on our mind lately, especially in the image computation space, is we did have a student
come to us last semester. She’s a Masters student in sculpture, and she was interested in
working with this collection of photographs of paintings that she has. It was around 65 photographs of paintings by an artist that she was close with,
and she wanted to know whether she could take those
photographs of these paintings and create new images
that would be in the style of those photographs
so that they would look like the artist had created
them even though he hadn’t. For her this was both an artistic project, something she was thinking about as part of her program as a research project, but it was also very personal to her. The artist she was working with was a very close friend and is deceased, and so it’s sort of a fixed collection. She was really motivated and invested in working on this project. And she had seen, some
of you might have seen, that Christie’s recently sold an artwork for almost half a million dollars that was produced with a GAN, so some of the things Peter was showing us earlier. We as a lab really should be producing GANs for Christie’s, I think. But she was really excited by this approach and saw that it seemed like kind of what she was hoping to do. The problem is that GANs do require you to have thousands or tens of thousands of images, and she had around 65, so that wasn’t quite going to be an approach she could use. And she herself had no
programming experience so that was also going
to be a bit of a barrier. So we wanted to find something that could at least get her started in thinking along these computational lines that could be a first
pass for this project. And so neural style
transfer was the approach that we decided we would take. This is where you are
combining the content of one image with the
style of another image by way of convolutional neural networks. And we were going to use the code base that’s produced by Justin Johnson. There’s his GitHub link, and
I’ll post the link again later if you’re interested in it. And there were several
advantages, we thought, in using this approach. It works on a small scale so it can work with just two images though it can work with more than two as we’ll
see in just a few minutes. After the initial setup, it
only requires a few lines of code to run, and so this meant that after a session on the Unix shell this student would be able
to run this code on her own, she’d be able to change
some of the parameters to tweak the output so she
wouldn’t be dependent on us to run the algorithm and make changes, and our best guess is she
would actually be able to get her hands on the code and make some slight
adjustments here or there based on what she was
seeing as the output. And the ability to tweak the parameters incrementally also meant that she could get a little bit of experience working with the code and trying to figure out
what it’s actually doing so that she could see a
little bit behind the scenes. One challenge that we had to be mindful of though was the filter effect. So as the name neural
style transfer suggests, what the algorithm is trying
to do is grab the content from the content image, take
the style from the style image, and generate a new output
image that has the content of one with the style of the other. But that’s not exactly what
our student wanted to do. She didn’t just want the
style of her artist’s paintings,
she also wanted content that felt new and generated as well. And so even though it’s a
neural style transfer algorithm, we were trying to think
is this still an approach that we could use to get something that feels really generative even on the content side as well. And so I’m gonna
show you a few experiments that I ran while we were just
trying to get our footing with what this algorithm was capable of, and then we taught her
how to do this as well so that she could experiment
with her own data set. But instead of showing her data set, I’m going to show some
Kandinsky paintings instead. So for the first experiment that we ran, we just did the defaults. We took one Kandinsky painting, we took another Kandinsky painting, and we produced a third. And as you can see, yes, we have the style of that second one, we have the colors of that second one, but we still have all the main core content from that first original image. And that’s because, again, we haven’t changed any of the default parameters at this point.
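Just to give you a sense of what running it actually looks like, a default run is one command at the terminal, roughly along these lines, where the file names are just placeholders for whichever images you’re using:

  # default run: one content image, one style image, everything else left at its defaults
  th neural_style.lua \
    -content_image content_kandinsky.jpg \
    -style_image style_kandinsky.jpg \
    -output_image out.png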
But what we could start doing is you could have more than one style image if you wanted. And so in this case we were now taking four Kandinsky paintings as our style input, and by default all of those style images are equally weighted, but you can adjust that, which is something we’ll get to. And so what you’re seeing is, yes, we’re starting to see a little bit more variation now, the final output colors are starting to look a little different, but that content is still really the same content.
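In this code base you pass multiple styles as a comma-separated list to the same flag, and you can optionally spell out the blend weights rather than letting them default to equal, something like:

  # four style images, explicitly weighted equally (placeholder file names)
  th neural_style.lua \
    -content_image content_kandinsky.jpg \
    -style_image style1.jpg,style2.jpg,style3.jpg,style4.jpg \
    -style_blend_weights 1,1,1,1 \
    -output_image out_multistyle.png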
And so the next step is the one that I think starts being promising for what the student was hoping to do, which is that you can also adjust the weights. So you can trade off the relative weights of how much you retain of that content image and how much you are taking from the style images. Changing the weights will result in higher loss for each of those epochs, which we were just hearing about, but as a result you start to see greater change with each iteration. So you start to get something that feels a little more transformative.
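Concretely, that trade-off is just two numbers on the command line; these particular values are only an illustration, the point is to lower the content weight relative to the style weight:

  # keep less of the content image, take more from the style images
  th neural_style.lua \
    -content_image content_kandinsky.jpg \
    -style_image style1.jpg,style2.jpg,style3.jpg,style4.jpg \
    -content_weight 1 \
    -style_weight 1000 \
    -output_image out_reweighted.png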
And so after that point, after you’re mixing up the styles and the weights, really it’s just a matter of continuing to tweak it. Try different weight settings and see which one starts to feel more in line with what you’re hoping to see. I rather like, actually, the third one. I think that looks like
a nice Kandinsky painting even though it’s not exactly one. But what else you could do is you could also take an output from an earlier epoch if you were interested. As it cycles through, every 100 epochs it gives you an output image, and so, again, depending on what the goal is for the project, you could take one of those earlier stages and have that be the image that you want to work with.
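In the code those epochs show up as iterations, and how often it writes out an intermediate image is just one more setting, so you could do something like:

  # save an intermediate image every 50 iterations instead of the default 100
  th neural_style.lua \
    -content_image content_kandinsky.jpg \
    -style_image style_kandinsky.jpg \
    -save_iter 50 \
    -num_iterations 1000 \
    -output_image out.png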
Or the last thing that I’ll show is you could take the output and use that as your new content image. So now we have an output which is already kind of mixed up, has a lot of things going on with it, and then we’re adding three more style images to that, and the result feels, again, that much more transformative now because we no longer have a content image with firm content already attached to it.
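Mechanically, that’s just pointing the content flag at the previous run’s result; out.png here stands in for whatever you produced in the earlier pass:

  # second pass: the previous output becomes the new content image, with new style images
  th neural_style.lua \
    -content_image out.png \
    -style_image style5.jpg,style6.jpg,style7.jpg \
    -output_image out_round2.png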
And so, I’m gonna take that last one now and put it actually in the context of a bunch of Kandinsky paintings, and you can see that it really does start to have the feel of Kandinsky without having the exact content copied from any single one of those. And so, this was something
that was really exciting to the student that we were working with. We were able to get her
set up on the computer in the Cube, which is actually where she did a lot of this work. So she had a key to the Cube, she was able to come in any time Sterling was open and run these algorithms on her own with her own data set. She could try switching out paintings, ’cause some images work better as content images than others and some work better as style images than others. And she also started thinking about how she might expand
her corpus even more to include things like
some of the photographs that the artist took that
inspired the paintings he then went on to do, and so mixing some of those source photographs with the photographs of the paintings themselves to produce something new.
But there are other ways you could tweak the parameters that I haven’t shown. You could do things like rotating the style image, which would have an effect on the output. You could choose how much of the color you’re taking from the content image or not. You could resize the style images. So the code is actually really flexible; it has a lot of different parameters that you might play around with.
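A couple of those are exposed directly as flags, if I’m remembering the names right: one for keeping the content image’s original colors and one for rescaling the style image, along the lines of:

  # keep the content image's colors and shrink the style image to half scale
  th neural_style.lua \
    -content_image content_kandinsky.jpg \
    -style_image style_kandinsky.jpg \
    -original_colors 1 \
    -style_scale 0.5 \
    -output_image out_colors.png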
And so for anyone who is interested in getting something up and running, or who is more interested in the technical back end of it, the fact that we ran this with CUDA and Torch on a GPU instead of a CPU, I recommend checking out the code on GitHub. It’s all open. And then this is the paper that inspired the code, for anyone who is interested in exploring that. Thank you. (audience applause)