IACS Research Day Faculty Talks: Matthew G. Reuter

>>So, good afternoon, everyone. Thanks for sticking around past lunch at Research Day. We have nothing to motivate you anymore; there's no more food. Anyway, my name is Matt Reuter. I'm here in IACS, and I'm also in the Department of Applied Mathematics and Statistics, but don't let that fool you: I'm a physical chemist by training. What I'm going to tell you a bit about today are some ongoing results from my research group on reconciling experiment
and computation in electron transport studies through molecules. Broadly, since I'm a physical chemist working in an applied math department in a computation institute, what exactly do I do? I respond to emails, I merge PDF documents, and I invert big matrices. Okay. So basically, research in my group
goes in two directions, depending on the starting point. In some cases we use nanoscience in a general sense, looking at really small things where quantum mechanics becomes really important. That usually motivates us with some questions; maybe they're more chemistry-related, maybe they're physics-related. But either way, we ask: how can we develop some nice models for this type of stuff? How can we then use these models, whether they're phenomenological or a bit more sophisticated, depending on the question we're asking? How can we dig into the computational math of what's going on there? Can we find some interesting questions to ask on the mathematical side? Can we find what a physicist might call an exotic case? Can we find what a mathematician would call a pathological case? And somehow relate the two back together. So sometimes nanoscience makes us look
and say, well, what computational math can we use? It turns out there are some really wonderful things in the dusty corners of linear algebra. If you've never heard of anti-eigenvalue analysis or pseudospectra, there's some really cool shit out there, pardon my French. But anyway, sometimes we can take the computational math and come back and find interesting questions to ask in nanoscience. And so there are two particular areas
that we’ve been focusing on in recent years. Number one, going back to some of my PhD work
is an electron transport. So how do you electrons go through some sort
of quantum mechanical system. Again, inverting big matrices. Some of the stuff that my two students Chris
and Jonathan talked about this morning in the lightning talks on nanomaterials, sort
of, start to look at the same thing in this idea of how do we use modern linear algebra
tools to better understand and characterize material properties. And we’ve been spending a lot of time looking
at complex fan structure in the last couple of years. At the end of the day, where this all sort
of fits together, across chemistry, physics, applied math, and computational science, is that we're trying to develop efficient and accurate algorithms. In a lot of cases, this means reaching for some very modern linear algebra tools, whether they were developed for computation or for pure math, or in some cases motivated by interesting physics and chemistry questions. Okay, that's the overview of the group. So, as the title said, we're going to sort of
compare and try to reconcile experiment and computation. And so, when [inaudible] the system that we're looking at, we've got some sort of molecule. I apologize, I didn't put a picture on the slide. There's a molecule, there are big electrodes, and we're going to put some sort of bias across and ask: how much electric current flows through the molecule? I will ruin the punchline for you here and say that experiment and computation do not agree. Most of you are thinking we want to
do high-level computation and get accurate results. You're hoping for, you know, quantitative accuracy; maybe you get two decimal places, or three decimal places, right? In the electron transport community, if we could get within three orders of magnitude, we'd have a beer, go home, and say that's great. Okay, so there's not a lot of agreement. And there's a whole bunch of reasons in the
community that have been suggested for why this agreement is so poor. They're all plausible; we don't need to get into them here. The big issue, or two of the big issues, that we [inaudible] is that experiment actually measures one thing: the conductance, the actual property that we're interested in. Computation in certain limits can calculate conductance, but in most limits it is actually calculating something called the transmission. Basically, it's asking: what's the probability that an electron will tunnel, or get, from the one electrode through the molecule over to the other electrode? And it turns out those two things are not the same thing. There's a reason one is called the conductance and one is called the transmission. Okay, so, they can be related through various
theories. The simplest one is Landauer-Büttiker theory, where the conductance is equal to a constant of proportionality, called the quantum of conductance, times the transmission evaluated at the Fermi energy. Okay, so they're proportional; that's fine and good. Couple of issues, though. It turns out calculating the transmission is hard, and we don't actually know what the Fermi energy is. But we know the charge of the electron, and we know Planck's constant, so hey, we know the constant of proportionality. Go, team. Okay, and so in this regard, this mismatch
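To make the Landauer-Büttiker relation concrete: the conductance is G = G0 * T(E_F), where G0 = 2e^2/h is the quantum of conductance. Here is a minimal numerical sketch; the transmission value is a made-up placeholder, since computing a real T(E_F) is the hard part:

```python
# Landauer-Buettiker relation: G = G0 * T(E_F),
# where G0 = 2 e^2 / h is the quantum of conductance.
E_CHARGE = 1.602176634e-19   # electron charge e, in coulombs (exact, 2019 SI)
PLANCK_H = 6.62607015e-34    # Planck's constant h, in J*s (exact, 2019 SI)

G0 = 2.0 * E_CHARGE**2 / PLANCK_H  # quantum of conductance, in siemens

def conductance(transmission):
    """Conductance from the transmission probability at the Fermi energy."""
    return G0 * transmission

# Hypothetical transmission, for illustration only; real values come from
# a (hard) quantum transport calculation.
T_fermi = 1.0e-4
print(f"G0 = {G0:.4e} S")
print(f"G  = {conductance(T_fermi):.4e} S")
```

Running this shows G0 is about 7.75e-5 siemens (roughly 77.5 microsiemens). The prefactor is exact; everything hard lives in T(E_F).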
complicates doing a comparison, because you're sort of comparing, you know, apples and orangutans. All right. So one of the advances we've been working on in recent years is developing new ways, using some of this math and some of these computational tools, to better compare computation and experiment, recognizing that one calculates transmission and one measures conductance. And so, basically, the way experiments are done, everything is statistical. We don't need to go into the details of how
the experiments are done, but they're extremely uncontrollable. They're irreproducible. This sounds like good science, right? Okay. So the way they get around this is they do the experiment, I don't know, a thousand, ten thousand times, and they basically compile this into a whole bunch of statistics. It turns out the statistics are reproducible. And so what happens is we get this thing called a conductance histogram: we've got the conductance that's measured against, essentially, how many times we measured it. We get different peaks in it, and we can attribute those peaks and say, well, there's the molecular conductance. Okay, now, a couple of things to point out. The full width at half maximum of this peak is about three quarters of an order of magnitude. Okay, that's the molecular conductance; yeah, there are some error bars here. Okay. So the way that these have traditionally been
read is that you would pick a single number: there's the mode of the peak (we won't call it the mode, we'll call it the average), and we'll say that's our single-molecule conductance, and just sort of throw out all of this other data. Okay. What we've been doing instead is asking: can we somehow use all of these statistics? Number one, can we understand why these peaks have the shapes that they do? The answer is yes; I have some references on the slide. But can we actually use the statistics to predict from experiment what that transmission would look like, with error bars? And the answer is, again, yes, we can. We did a couple of simple test
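The statistical workflow just described (run the experiment thousands of times, histogram the conductances, read off the peak) can be sketched with simulated data. The peak position and width below are hypothetical stand-ins, with the width chosen so the full width at half maximum is about three quarters of a decade:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical single-molecule conductance peak: measurements scatter
# log-normally around log10(G/G0) = -4.0. A FWHM of ~0.75 decades
# corresponds to sigma = 0.75 / 2.355 decades.
TRUE_LOG_G = -4.0
SIGMA = 0.75 / 2.355

# "Do the experiment ten thousand times."
measurements = [random.gauss(TRUE_LOG_G, SIGMA) for _ in range(10_000)]

# Compile the statistics: bin the log-conductances into a histogram
# with 0.1-decade bins.
BIN = 0.1
histogram = Counter(round(m / BIN) * BIN for m in measurements)

# Read off the peak (the mode of the histogram).
peak_log_g, count = max(histogram.items(), key=lambda kv: kv[1])
print(f"histogram peak at log10(G/G0) = {peak_log_g:+.1f} ({count} counts)")
```

The traditional reading stops at `peak_log_g`; the point of the talk is that the rest of `histogram` (the peak shape) also carries usable information.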
systems here using simulated data. And here's a real computation that was done by some of my colleagues at Lawrence Berkeley National Lab and the Molecular Foundry, compared against experiments done on the same molecule by one of the groups just down the road at Columbia. What we're able to find, [inaudible] in the red error bar: this would be the conductance that the computation, or, excuse me, that experiment predicted, with one standard deviation, and there's what the computation predicted. This is using the best computational methods that have been developed for these types of studies, and when you go and look at their paper, published probably three years ago, they're very sad that the two didn't land right on top of each other. But the computation is within two standard deviations of what the experiment predicts, so they're sort of crying and saying, we weren't really on the money. And I'm like, I don't think you can say you weren't on the money: you're within two standard deviations. Sure, you're right at two standard deviations, but you're right there. What we've ended up doing is finding better ways to do the comparisons, so that we can actually put both the conductance and the transmission on equal footing. Okay, so that's one area. Another one, coming
back towards the computation, is: how do we do better computations? It's possible there's something wrong in our computations that contributes to these three, four, five orders of magnitude of difference that we sometimes get between computation and experiment. It turns out, if you start digging into the literature, there are some well-known numerical artifacts in these transport calculations. They know, for instance, that you typically don't get basis set convergence: you throw more computational resources at your problem, and it basically turns into a random number generator. Your number doesn't converge; it keeps
flopping around like a fish. That's usually not regarded as a good thing. There's another, called ghost transmission, a numerical artifact where you basically take the molecule out of the system but leave, like, a three-nanometer gap, and look at the tunneling transmission probability. Any Physics 1 student could figure out that the probability of the electron getting through that gap is zero. Well, it's not; it's like 10 to the minus 120, but we can all call that zero. Okay. And so you can then go and actually employ
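To put a rough number on why that gap transmission should be zero: a crude one-dimensional WKB estimate for tunneling through a vacuum gap is T ~ exp(-2*kappa*d), with kappa = sqrt(2*m*V)/hbar. The 5 eV barrier height below is an assumed, typical metal work function, purely for illustration:

```python
import math

# Physical constants (SI units).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electronvolt, in joules

# Assumed rectangular barrier: height ~5 eV (a typical metal work
# function), width 3 nm (the empty gap left after removing the molecule).
barrier_height = 5.0 * EV
gap = 3.0e-9

# WKB decay constant kappa = sqrt(2 m V) / hbar, and the crude
# transmission estimate T ~ exp(-2 kappa d).
kappa = math.sqrt(2.0 * M_E * barrier_height) / HBAR
transmission = math.exp(-2.0 * kappa * gap)

print(f"kappa = {kappa:.3e} 1/m")
print(f"T ~ {transmission:.1e}")  # effectively zero on any experimental scale
```

This back-of-the-envelope estimate comes out around 10^-30 rather than the 10^-120 quoted in the talk (the exponent is very sensitive to the assumed barrier model), but either way the true answer is indistinguishable from zero, which is what makes a computed transmission of 1 in 10 such an obvious artifact.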
the common state-of-the-art techniques on these systems and find that the computation produces a transmission probability of 1 in 10. That's close to 10 to the minus 120, right? Okay. So what we've been doing in recent years, and this is very much ongoing work with Pannu (assumed spelling), who's here in the audience, is trying to develop and implement solutions. Number one, figure out the causes of some of these problems; Robert and I hypothesized about this a couple of years ago. The implementation has been a bit more fun; we're working on it now, to demonstrate that not only can we understand where these numerical artifacts are coming from, but that, if you'll let me use the verb in a scientifically appropriate context, we can exorcise ghost transmission. Okay. And so what we plotted here, some preliminary
results from a few years ago. Basically, in the black line we've got what the traditional codes are saying you would get for the transmission; all of the jumpiness doesn't matter. The gray line is where we use our fix (sorry, other way around: the black line is the fix), and what you see is that, in general, when you put in this fix, things shift down a little bit. And we can start to account for why things are a little bit higher than they necessarily should be. So, anyway, I will end there, and just put
up the slide. Thanks to Pannu for his ongoing work; to Steven Shudey [assumed spelling], an absolutely phenomenal undergrad here about two or three years ago who helped do the [inaudible] histogram work; and to my colleagues and collaborators, Torsten from the University of Copenhagen, and Robert here. So I'd be happy to take maybe one quick question, if you have any. Thanks for your attention.

[ Applause from the Audience ]

>>[Inaudible] May I ask a question. Don't you think that the most important aspect
for the difference that you observe between [inaudible] theory and experiment is the detailed knowledge of the attachment of the molecule to the [inaudible]?

>>So that is certainly very much... so the question was, for other people, because you couldn't hear it: is the issue the actual geometry of the molecule on the [inaudible] support? Absolutely, it is. The issue, though, is that if that were the only factor, I would argue that half the time experiment would be lower than theory, and half the time it would be higher. To me, the fact that computation is always, always two, three, four, five, seven, eight orders of magnitude bigger tells me, I think, that there's something else a little more systemic and pernicious going on. I mean, I can't discount that factor; it's certainly a plausible explanation, but I don't think it's the only one. Right. Thank you.
