Past Activities



Dr. Bob LaBarre (United Technologies Research Center)

Two Elementary Problems from an Elementary Viewpoint … Made Easy

Bob LaBarre has enjoyed a 37-year career as an industrial mathematician at United Technologies Research Center and is currently Principal Mathematician and Group Leader, System Dynamics and Optimization, responsible for 15 Ph.D. research scientists. In 2010 he was elected to the Connecticut Academy of Science and Engineering. Dr. LaBarre received his Bachelor of Science (University Scholar) and Master of Science degrees in mathematics from the University of Connecticut in 1976 and 1978, respectively. In 1987 he began working part-time on a Ph.D., which he completed in 1992, also in mathematics from the University of Connecticut. As an industrial mathematician he has made numerous original contributions supporting the businesses of United Technologies Corporation, many used in our everyday lives, from cryptographic methods in automotive key fobs and keyless door locks to a computationally efficient gradient-free stochastic optimization methodology. His work in algebraic graph theory, recognized by the UTC Senior Vice President’s Award, provides an understanding of uncertainty propagation through complex systems, and his work in stochastic analysis led to the generation of bounds on elevator dispatching times, recognized by a UTRC Outstanding Achievement Award. He has authored or co-authored over 40 technical papers and been awarded 7 patents by the USPTO. Bob has mentored two generations of industrial mathematicians at UTRC; taught graduate and undergraduate courses as an adjunct faculty member over a twenty-year span at RPI – Hartford, UConn, and Worcester Polytechnic Institute; participated on three Ph.D. advisory committees (2 completed); and been an active member of the Mathematical Sciences Advisory Board at WPI.
He has participated in summer NSF sponsored programs helping high school mathematics teachers understand mathematics outside the educational framework, and he has interacted with some of the best and brightest high school students around the world via the Intel International Science and Engineering Fair as a judge for the UTC student awards program.


Xuping Xie (Dept. of Mathematics)

Approximate Deconvolution Reduced Order Modeling

This presentation proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient ($\nu=10^{-3}$).
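The POD step in frameworks like this one is commonly computed from the SVD of a snapshot matrix. The sketch below uses synthetic snapshot data (a decaying, advecting pulse) rather than the talk's Burgers data, and illustrates only the basis construction, not the differential filter or AD closure:

```python
import numpy as np

# A common way to compute the POD basis: the SVD of a snapshot matrix.
# The data here are synthetic, standing in for the Burgers snapshots.
def pod_basis(snapshots, r):
    """Return the leading r POD modes and singular values.

    snapshots: (n_space, n_time) array whose columns are solution snapshots.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s[:r]

x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 1.0, 50)
S = np.array([np.exp(-100.0 * (x - 0.2 - 0.5 * tk) ** 2) * np.exp(-tk)
              for tk in times]).T

Phi, sigma = pod_basis(S, r=10)
# POD modes are orthonormal; the projection error onto their span is
# controlled by the discarded singular values.
err = np.linalg.norm(S - Phi @ (Phi.T @ S)) / np.linalg.norm(S)
print(f"relative projection error with 10 modes: {err:.1e}")
```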

Dr. Axel Modave (Dept. of Mathematics)

GPU-Accelerated Nodal Discontinuous Galerkin Method for Time-Domain Wave Propagation

Finite element schemes based on discontinuous Galerkin (DG) methods exhibit interesting features for massively parallel computing and GPU computing. Nowadays, they represent a credible alternative for large-scale industrial applications. Computational performance of such schemes however strongly depends on their implementations. In this talk, key aspects of GPU computing will be introduced and different nodal DG implementations will be compared using up-to-date performance results.


Igor Pontes Duff Pereira (ONERA)

$\mathcal{H}_2$ model approximation, interpolation and time-delay systems

Some recent developments on the $\mathcal{H}_2$ model reduction problem and the Loewner framework are presented for the case of Time-Delay Systems (TDS). Firstly, we introduce the Loewner framework, which enables us to perform rational interpolation with a state-space structure. Secondly, the relation between $\mathcal{H}_2$/$\mathcal{L}_2$ model reduction algorithms and TDS stability charts will be highlighted by some numerical simulations. Finally, the $\mathcal{H}_2$ model reduction problem will be addressed in the case where the approximation has a single state delay.
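As a toy illustration of the object at the heart of the Loewner framework (this example, including the degree-1 transfer function, is mine and not from the talk):

```python
import numpy as np

# The Loewner matrix built from transfer-function samples: entries
# L_ij = (v_i - w_j) / (mu_i - lam_j), where v_i = H(mu_i), w_j = H(lam_j).
def loewner_matrix(mu, v, lam, w):
    return (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])

H = lambda s: 1.0 / (s + 1.0)          # a degree-1 rational toy model
mu = np.array([1.0, 2.0, 3.0])         # "left" interpolation points
lam = np.array([4.0, 5.0, 6.0])        # "right" interpolation points
L = loewner_matrix(mu, H(mu), lam, H(lam))

# For data sampled from a rational function of McMillan degree 1, the
# Loewner matrix has rank 1; its rank reveals the order of the model.
print(np.linalg.matrix_rank(L))  # → 1
```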

Alessandro Castagnotto (TUM)

Salami slicing $\mathcal{H}_2$-pseudo-optimal model reduction [slides]

Model order reduction for linear time-invariant systems has been studied extensively for about 50 years now. For so-called truly large-scale systems, i.e., of order higher than $10^4$, Krylov subspace methods (also known as interpolatory methods) represent one of the few viable approaches, since they require only the solution of sparse shifted linear systems of equations. That said, the selection of appropriate reduction parameters such as the reduced order and interpolation points (shifts) is crucial to the quality of the results. In this contribution, we present a framework due to Panzer, Wolf and Lohmann that allows for the adaptive selection of both the reduced order and optimal shifts. Our primary tool will be a particular Sylvester equation, an equivalent parametrization of the Krylov subspace basis. Using this tool, we will derive a new error formulation and, based on this, a cumulative reduction framework (CURE) that iteratively adds information to the reduced model. The search for $\mathcal{H}_2$-optimal shifts will be conducted by pseudo-optimal reduction instead of Hermite interpolation as in IRKA. We will discuss how this approach enjoys a series of nice properties.
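To make the Sylvester-equation parametrization concrete, here is a minimal numerical check under an assumed single-input setup with diagonal shift matrix; the notation is mine, not necessarily that of Panzer, Wolf and Lohmann:

```python
import numpy as np

# Assumed setup: the rational Krylov basis with columns
# v_i = (s_i I - A)^{-1} b satisfies the Sylvester relation
#   A V - V S = -b l^T,  with S = diag(s_i) and l a vector of ones.
rng = np.random.default_rng(0)
n = 50
shifts = np.array([1.0, 2.0, 4.0])
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # a generic test matrix
b = rng.standard_normal(n)

V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
S = np.diag(shifts)
l = np.ones(len(shifts))

# Verify the Sylvester relation that parametrizes the Krylov basis.
residual = A @ V - V @ S + np.outer(b, l)
print(np.max(np.abs(residual)) < 1e-8)  # → True
```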


Hadi El-Amine (ISE)

Robust Post-donation Blood Screening under Prevalence Rate Uncertainty

Blood products are essential components of any healthcare system, and their safety, in terms of being free of transfusion-transmittable infections, is crucial. While the Food and Drug Administration (FDA) in the United States requires all blood donations to be tested for certain infections, it does not dictate which particular tests should be used by blood collection centers. Multiple FDA-licensed blood screening tests are available for each infection, and all screening tests are imperfectly reliable and have different costs. In addition, infection prevalence rates within the donor population are uncertain not only for emerging infections, but also for established infections. In this setting, the budget-constrained blood collection center's objective is to devise a "robust" post-donation blood screening scheme that minimizes the risk of an infectious donation being released into the blood supply. Toward this goal, we study various objectives, including the minimization of mean-variance and regret-based objectives associated with the transfusion-transmittable infection risk, and characterize structural properties of their optimal solutions. These characterizations allow us to gain insight and develop efficient, exact algorithms. Our research shows that using the proposed optimization-based approach provides robust solutions that have significantly lower expected infection risk compared to various FDA-compliant testing schemes. Our findings have important public policy implications.

Klajdi Sinani (Dept. of Mathematics)

Model Reduction for Unstable Systems [slides]

Generally, large-scale dynamical systems pose tremendous computational difficulties when applied in numerical simulations. In order to overcome these challenges we use several model reduction techniques. For stable linear models these techniques work very well and provide good approximations to the full model. However, some of these methods are not so robust or, in some cases, may not even work if we are dealing with unstable systems or delay systems. We explore these problems and techniques for unstable and delay systems.


Hrayer Aprahamian (ISE)

Residual Risk and Waste in Donated Blood with Pooled Nucleic Acid Testing [slides]

An accurate estimation of the residual risk of transfusion-transmittable infections (TTIs), which include the human immunodeficiency virus (HIV), hepatitis B and C viruses (HBV, HCV), West Nile virus (WNV), among others, is essential, as it provides the basis for blood screening assay selection. While the highly sensitive nucleic acid testing (NAT) technology has recently become available, it is highly costly. As a result, in most countries, including the United States, the current practice for HIV, HBV, HCV, and WNV testing in donated blood is to use pooled NAT. Pooling substantially reduces the number of tests required, especially for TTIs with low prevalence rates. However, pooling also reduces the test's sensitivity, because the viral load of an infected sample might be diluted by the other samples in the pool to the point that it is not detectable by the NAT, leading to potential TTIs. Infection-free blood may also be falsely discarded, resulting in wasted blood. We derive analytical expressions for the residual risk, expected number of tests needed, and expected amount of blood wasted for various two-stage pooled testing schemes, including Dorfman-type and array-based testing, considering the dilution effect, imperfect tests, infectivity of the blood unit, and stochasticity in viral load growth. We then calibrate our model using clinical data and perform a case study. Our study offers key insights on the trade-offs incurred for different testing schemes.
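For intuition on why pooling reduces test counts at low prevalence, the textbook Dorfman calculation with a perfect test is sketched below; the talk's analysis additionally models dilution, imperfect tests, infectivity, and viral-load stochasticity, which this toy formula ignores:

```python
# Expected number of tests per sample under classic two-stage Dorfman
# pooling with pool size n and prevalence p, assuming a perfect test.
def dorfman_tests_per_sample(p, n):
    # One pooled test shared by n samples, plus n individual follow-up
    # tests whenever the pool is positive (probability 1 - (1-p)^n).
    return 1.0 / n + (1.0 - (1.0 - p) ** n)

# At 0.1% prevalence, pools of 10 need about 0.11 tests per sample,
# an almost ten-fold saving over individual testing.
print(round(dorfman_tests_per_sample(0.001, 10), 3))
```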

Daniel Friedman (Dept. of Mathematics)

Stacker-Crane Problem: Linear-Time Near-Asymptotically Optimal Solution [slides]

The Stochastic Stacker-Crane Problem is a generalization of the Euclidean Traveling Salesman Problem to instances drawn from a probability distribution. It has a large number of applications, from Uber and Amazon delivery to ordering robotic tasks. We will present a previous approximation algorithm and our current algorithm, which together showcase a wide set of tools useful in computational geometry, including the central limit theorem, random permutations, spatial partitioning trees, and expected lengths.


Rafael Marques (ICAM)

Henstock-Kurzweil Integration: Applications and Numerical Methods [slides]

We introduce the Henstock-Kurzweil integral and present some applications: Generalized Ordinary Differential Equations and numerical methods. Generalized ODEs have proved to be a powerful tool for studying differential systems with discontinuous solutions, such as Measure Differential Equations and Functional Differential Equations with impulses, among others. We also present a Generalized ODE with applications to Integral Equations.


Samantha Erwin (Dept. of Mathematics)

Atomic Bombs and Antibodies

Contact the speaker directly for a copy of the abstract or slides.

Drayton Munster (Dept. of Mathematics)

"OpenMDAO or: How I Learned to Stop Worrying and Love the Chain Rule" [slides]

As engineering projects become increasingly complex, it is often necessary to incorporate models from many different disciplines. Finding optimal designs in a feasible amount of time often requires gradient-based optimization routines. However, as more (sub)models are added, it becomes increasingly difficult to analytically state the derivatives for the entire system subject to the complex interdependencies and constraints. If the models are computationally expensive and the design space is large, then finite difference approximations may be intractable as well. Multi-disciplinary optimization frameworks, such as OpenMDAO, allow users to construct complex models out of many disciplines and are responsible for evaluating the derivatives of the model. In this talk, I'll be discussing the basis for the OpenMDAO framework as well as my experiences at NASA Glenn Research Center.


Alexandra Hyler (Biomedical Engineering)

Investigating the Dynamic Biophysical Environment of Ovarian Cancer [slides]

Ovarian cancer has one of the worst incidence-to-death ratios due to a lack of detection before the cancer spreads. There is therefore much interest in the effects of fluid flow (mainly due to bowel movements) on ovarian cancer and the resulting mechanical microenvironment in the abdominal cavity. Could this dynamic environment contribute to the metastasis of one of women’s leading killers? In particular, at what threshold does this shear force become significant in disease progression? To investigate, we tested the impact of very low fluid shear stress forces on mouse epithelial ovarian cancer cells at three stages (from non-malignant to highly aggressive) to observe the effects of fluid flow on cancer cells. Based upon these results, we have started building a mathematical model of our current testing system, a future testing platform, and the physiological reality. We then analyze and begin hypothesizing what biological implications and effects fluid flow has on the progression and aggressiveness of ovarian cancer.

Justin Krueger (Dept. of Mathematics)

A Robust Parameter Estimation Method for ODE Models [slides]

As mathematical models continue to grow in size and complexity, the efficiency of the numerical methods used to solve their corresponding inverse problems becomes increasingly important. With differential equation models, for example, avoiding the computation of the forward solution is desirable. In this work, we construct a “nearby” inverse problem that avoids this computation to create a numerically robust approach while also providing parameter estimates suitable for the solution of the original inverse problem.


Dr. Gilbert Strang (MIT)

Tridiagonal Matrices and Applied Linear Algebra

My favorite matrix has three nonzero diagonals, with $-1$'s and $2$'s and $-1$'s. It is like a second derivative (very useful). This talk is about a much larger family of tridiagonal matrices in applied mathematics:

  1. How to replace the zeros to reach a full matrix (optimally)
  2. How to invert that full matrix
  3. How to find the eigenvalues
  4. How tridiagonal matrices and their inverses come up in applications.
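Item 3 has a famous closed form: the $n \times n$ second-difference matrix has eigenvalues $2 - 2\cos\!\big(k\pi/(n+1)\big)$, $k = 1, \dots, n$, which a few lines of NumPy can confirm (my illustration, not the speaker's):

```python
import numpy as np

# The -1, 2, -1 matrix and its known eigenvalues: for the n x n
# second-difference matrix, lambda_k = 2 - 2*cos(k*pi/(n+1)).
n = 8
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

computed = np.sort(np.linalg.eigvalsh(K))
exact = 2 - 2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
print(np.allclose(computed, np.sort(exact)))  # → True
```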
The ideas extend to other banded matrices but the presentation will turn to education (linear algebra, differential equations, and computational science). Do videos succeed? Does online homework succeed (for ideas and not just practice in repetition)? Do applications help or interfere? Hard questions.


Thomas Battista (Dept. of Aerospace and Ocean Engineering)

Underwater Vehicle Depth and Attitude Regulation in Plane Progressive Waves [slides]

Hydrodynamic forces from unsteady and nonuniform flow fields are difficult to accurately capture in control-oriented dynamic models of undersea vehicles. In this talk, I present a dynamic model derived from Hamilton's principle for an underwater vehicle subject to plane progressive waves. The hydrodynamic forces predicted by the model are compared with an analytical solution based on potential flow theory. Having demonstrated that the proposed model captures wave forcing effects with sufficient accuracy, a proportional-derivative control law and a nonlinear control law are implemented to stabilize an underactuated underwater vehicle about a desired depth and attitude in plane progressive waves.

David Allen (Dept. of Aerospace and Ocean Engineering)

Input Waveform Optimization for Vibrationally Controlled Systems [slides]

My presentation addresses optimization of the input amplitudes for mechanical, control-affine systems subject to high-frequency, high-amplitude periodic forcing. The problem is to determine the waveform (shape), relative phase, and amplitude for each zero-mean input such that the input amplitudes are minimized. In both approaches that I present, I appeal to the averaging theorem to formulate a constrained optimization problem. In the first approach, the constraints are algebraic nonlinear equalities in terms of the input amplitudes and their relative phase. The constrained optimization problem may be solved using analytical or numerical methods. In the second approach, the inputs are represented as truncated Fourier series whose coefficients are determined through numerical optimization. The two methods are applied to a two-input, three-degree-of-freedom system: a horizontal, two-link mechanism appended to a cart.


Breakfast with Dr. Cleve Moler (Co-founder of MathWorks, past SIAM president)

Cleve Moler is chief mathematician, chairman, and cofounder of MathWorks. Moler was a professor of math and computer science for almost 20 years at the University of Michigan, Stanford University, and the University of New Mexico.
He spent five years with two computer hardware manufacturers, the Intel Hypercube organization and Ardent Computer, before joining MathWorks full-time in 1989.
In addition to being the author of the first version of MATLAB, Moler is one of the authors of the LINPACK and EISPACK scientific subroutine libraries. He is coauthor of three traditional textbooks on numerical methods and author of two online books, Numerical Computing with MATLAB and Experiments with MATLAB.


Souvick Chatterjee (ESM Department)

Mathematical modeling of mosquito drinking [slides]

Mosquitoes drink using a pair of in-line pumps in the head that draw liquid food through the long channel known as the proboscis. Experimental observations with synchrotron x-ray imaging indicate two modes of drinking: a predominantly occurring continuous mode, in which the two pumps expand cyclically at a constant phase difference, and an occasional, isolated burst mode, in which expansion of the downstream pump is 10 to 30 times larger than in the continuous mode. We have used a reduced-order model of the fluid mechanics to hypothesize an explanation of this variation in drinking behavior. The unknown geometry of this expanded pump is modeled based on known constraints and parameters, and its effect on drinking efficiency will be discussed. Our model results show that the continuous mode is more energetically efficient, whereas the burst mode creates a large pressure drop, which could potentially be used to clear blockages. Comparisons with pump knock-out configurations demonstrate different functional roles of the pumps in mosquito feeding. We will show mathematically the conditions under which a bubble can become clogged inside the feeding system, and how the insect can use the burst mode to keep itself from choking to death.

Selin Sariaydin (Dept. of Mathematics)

Stochastic Approach for Nonlinear Inversion [slides]

Nonlinear parametric inverse problems present huge computational challenges since we need to solve a sequence of large-scale discretized, parametrized, partial differential equations (PDEs) in the forward model. We combine simultaneous random and deterministic sources and detectors to reduce the cost. This is joint work with Eric de Sturler and Misha Kilmer.


2015 Fifth Annual SIAM Mid-Atlantic Regional Mathematical Student Conference and Industrial Days

The George Mason University SIAM Chapter, the Shippensburg University SIAM Chapter, and the Virginia Tech SIAM Chapter organized the Fifth Annual SIAM Mid-Atlantic Regional Mathematical Student Conference and Industrial Days, which was held on the campus of George Mason University. On Friday, the conference started with an "Industry Day" during which speakers from various companies and government institutes discussed how mathematics is used in their fields. The day concluded with an industry panel. The industry participants were: Scott Cochran from Fannie Mae, Pamela Williams from LMI Government Consulting, Ryan Smith from DigitalGlobe, Jason Dalton from Azimuth1, and Tony Kearsley from the National Institute of Standards and Technology (NIST). On Saturday, students presented research talks, and the conference concluded with keynote speaker Carlos Castillo-Chavez, who gave a very interesting talk on mathematical biology.


Dr. Anna Ritz (Department of Computer Science)

Signaling Hypergraphs [slides - part 1] [slides - part 2]

Cells communicate with each other to perform their functions within the body. When a cell receives an external signal from the environment, it responds with a series of molecular reactions that alters the cell's behavior, e.g., causing it to divide, move, or self-destruct. These reactions constitute networks called "signaling pathways" whose alterations can cause diseases such as cancers. Directed graphs are the most common representation of signaling pathways, making them amenable to a wide array of graph-theoretic algorithms. However, directed graphs often inaccurately represent the underlying biology of signaling reactions. In this talk, I describe an alternative mathematical representation called the "signaling hypergraph." First, I illustrate how signaling hypergraphs overcome many limitations of graph-based representations. Second, I present an algorithm for the shortest hyperpath problem. Finally, I apply the algorithm to a well-known signaling pathway, and describe how the shortest hyperpaths better represent signaling reactions than the corresponding shortest paths in graphs. Signaling hypergraphs exemplify how careful attention to the underlying biology can drive developments in a largely unexplored field of computer science.


Tour of the VT Visionarium

A small group of SIAM members went on a tour of the VT Visionarium. Here is a short summary from the Visionarium website:
We focus on adoption of supercomputing and visual analysis tools to advance science, engineering, and education. Through our educational and support services, we boost access to and adoption of cutting-edge tools that integrate with researchers’ data, questions, and workflows.

02/12/15 - Fluid Dynamics

Leila N. Azadani (Laboratory for Fluid Dynamics)

Large Eddy Simulation of Atmospheric Circulation Models in Spectral Space [slides]

Numerical simulations of atmospheric circulation models are limited by their finite spatial resolution, and so large eddy simulation (LES) is a preferred approach to study these models. In LES a low-pass filter is applied to the flow field to separate the large and small scale motions. In implicitly filtered LES the computational mesh and discretization schemes are considered to be the low-pass filter, while in explicitly filtered LES the filtering procedure is separated from the grid and discretization operators, which allows for better control of the numerical errors. In this talk I will present the results of implicitly filtered and explicitly filtered LES of a barotropic atmospheric circulation model on the sphere in spectral space and compare them with the results obtained from direct numerical simulation (DNS). Different numerical experiments will be presented to show the better performance of explicit filtering over implicit filtering.
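The explicit filtering step can be sketched in one dimension with a sharp spectral cutoff; this is a toy stand-in for the filters used in the talk, not the talk's actual setup:

```python
import numpy as np

# Toy illustration of explicit low-pass filtering in spectral space:
# zero out all Fourier modes above a cutoff wavenumber.
def spectral_filter(u, k_cut):
    u_hat = np.fft.fft(u)
    k = np.abs(np.fft.fftfreq(len(u), d=1.0 / len(u)))
    u_hat[k > k_cut] = 0.0
    return np.real(np.fft.ifft(u_hat))

n = 128
x = 2 * np.pi * np.arange(n) / n
u = np.sin(x) + 0.3 * np.sin(20 * x)      # large-scale + small-scale motion
u_bar = spectral_filter(u, k_cut=10)

# The filtered field retains only the large-scale component.
print(np.max(np.abs(u_bar - np.sin(x))) < 1e-12)  # → True
```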

Xuan Cai (Karlsruhe Institute of Technology, Germany)

Computational Fluid Dynamics (CFD) simulation of wetting phenomena [slides]

Liquid wetting on solid surface is a crucial process in many industrial applications, such as coating, lubrication and multiphase chemical reactors. It is vital to accurately model the motion of the three-phase contact line, where conventional sharp interface models suffer from a paradox between moving contact line and no-slip boundary condition at the solid wall. The Phase field method (PFM) based on diffuse interface model treats the interface as a region of finite thickness and is promising to resolve this paradox. In this talk, I start with a brief review on PFM, and introduce how we implemented the method in OpenFOAM (an open source finite-volume-based CFD software). Then I present the test cases of increasing complexity, from elementary wetting process to industry-oriented applications, to validate the implemented model and demonstrate its capability of simulating complex two-phase flows on solid surface. Finally I talk about the cooperation with Prof. Pengtao Yue, which is the purpose of my 3-month research visit to Math VT.


Felix Schwenninger (University of Twente, The Netherlands)

Recent results about functional calculus for Tadmor-Ritt operators

An operator $T$ on a Banach space $X$ is called Tadmor-Ritt if its spectrum is contained in the closed unit disc of the complex plane and the resolvent satisfies $$\|(zI-T)^{-1}\| \leq \frac{C(T)}{|z-1|},\quad \text{for } |z|>1,$$ for some $C(T)>0$. This notion was introduced in the 1980s to study the stability of numerical schemes. The question whether $T$ is power-bounded, i.e., whether $\sup_{n\in\mathbb{N}}\|T^{n}\|<\infty$, was answered positively by Nagy and Zemanek in 1999. This result can be seen as a consequence of a more general result about boundedness of a functional calculus for $T$. We will give an introduction to such a calculus and discuss recent results concerning its bound.
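A small numerical experiment on a toy, non-normal matrix (my example, not from the talk) illustrates the two quantities involved: the resolvent bound sampled just outside the unit circle, and the power bound guaranteed by the Nagy-Zemanek result:

```python
import numpy as np

# Toy example: estimate the Tadmor-Ritt constant
# C(T) = sup_{|z|>1} |z - 1| * ||(zI - T)^{-1}|| on a circle just
# outside the unit disc, and observe power-boundedness sup_n ||T^n||.
T = np.array([[0.9, 0.5],
              [0.0, 0.8]])
I2 = np.eye(2)

zs = 1.001 * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 2000))
C_est = max(abs(z - 1.0) * np.linalg.norm(np.linalg.inv(z * I2 - T), 2)
            for z in zs)

power_norms = [np.linalg.norm(np.linalg.matrix_power(T, n), 2)
               for n in range(1, 200)]
print(max(power_norms) < 10.0)  # → True
```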

Sevak Tahmasian (ESM Department)

Vibrational control and averaging of underactuated mechanical systems [slides]

Underactuated systems have fewer actuators than degrees of freedom. Examples include airplanes, satellites, mobile robots, fish, and flying insects and birds. Underactuated systems often cannot be stabilized using smooth, static state feedback. Control using oscillatory inputs is an effective method for certain classes of underactuated systems. Vibrational control is the use of high-frequency, high-amplitude oscillatory inputs for control of mechanical systems. A useful tool for analyzing vibrational control systems is averaging theory, which transforms time-periodic systems into time-invariant ones. In this talk, averaging of mechanical systems with high-frequency, high-amplitude inputs and a closed-loop vibrational control method for underactuated mechanical systems will be discussed.

11/12/14 - Math Education

Morgan Dominy (Dept. of Mathematics)

Classical, Frequentist, and Subjective Approaches to Probability in the Classroom [slides]

There are currently three widely agreed-upon approaches to calculating or modeling the probability of an event: classical, frequentist, and subjective. Currently, class time in probability and statistics courses heavily favors the classical approach. However, hands-on and computer simulations can be leveraged to build on students' existing intuition by taking a more frequentist approach. Additionally, students can be encouraged to use the subjective approach to formulate preliminary models before thinking in classical or frequentist terms.
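The frequentist approach the abstract advocates can be demonstrated in a few lines of simulation; this is a generic classroom example, not one taken from the talk:

```python
import random

# The frequentist approach in miniature: estimate P(two dice sum to 7)
# by long-run relative frequency instead of classical counting.
random.seed(1)

def simulate_seven(trials=100_000):
    hits = sum(random.randint(1, 6) + random.randint(1, 6) == 7
               for _ in range(trials))
    return hits / trials

estimate = simulate_seven()
# Classical answer: 6/36 = 1/6 ≈ 0.1667; the simulated frequency
# converges to it as the number of trials grows.
print(abs(estimate - 1.0 / 6.0) < 0.01)  # → True
```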

Nathan Phillips (Dept. of Mathematics)

The Growth of Additive Reasoning with Second Derivatives [slides]

Reasoning about differences between two quantities is difficult and becomes increasingly important in middle grades students' work with integers and algebra. Utilizing a constructivist teaching experiment methodology, we worked with two sixth-grade students over the course of eight teaching sessions on complex additive situations in which students operated on differences of pairs of quantities to construct a second difference. We describe important changes in the students' ability to construct and reflect on the quantities involved in these situations. We hypothesize that purposeful selection of the context and variation of the number and type of missing quantities promoted student learning.

10/29/14 - Robotics and Optimal Control

Hunter McClelland (Dept. of Mechanical Engineering)

Optimal Walking Designs for a Tall Tripedal Robot [slides]

Robotic walking is a particularly difficult trajectory design and control problem. To address this type of problem, a design method based on solving Optimal Control Problems (OCPs) will be presented. For each OCP, a numerical method based on pseudospectral methods and direct collocation is used. The talk will be framed with a particularly fun application: a 3-legged "spider robot" designed to be very tall, perhaps 10 m. Since nature provides few walking animals on that scale (the giraffe comes to mind) and none of them walk on 3 legs with the types of gait imagined, the method can only be compared with human designer intuition. The talk will also feature some fun videos of previous hardware designs, including humorous but unfortunate failures.

Artur Wolek (Dept. of Aerospace Engineering)

Optimal path planning for underwater gliders [slides]

Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal waters. The coastal ocean is a dynamic and uncertain environment with significant currents that can challenge the mobility of these efficient (but slow) vehicles. To address this challenge, a series of path planning approaches are developed that improve robustness to currents, improve energy efficiency, or minimize time of flight for maneuvers on the length scale of a few turn radii of the vehicle. In this talk, the latter time-optimal control problem is discussed in detail. Nonlinear optimal control theory (the Minimum Principle, and its geometric interpretation via the Hodograph) is used to derive a finite set of candidate optimal controls. The problem is then translated into a series of finite dimensional constrained optimization problems that can be readily solved to give locally optimal solutions. From these solutions, the lowest cost (and possibly globally optimal) control is then identified. Lastly, the design and field testing of an underwater glider platform to validate these motion planning approaches is presented.

10/15/14 - Solving PDEs

Thomas May (Dept. of Mathematics)

Using Walsh Functions to Solve Nonlinear PDEs with Shocks [slides]

One of the major challenges in modeling hypersonic flows is the shocks that develop in the solutions. Traditional methods of numerically solving PDEs have trouble handling shocks. Using the orthonormal basis set of Walsh functions, we derive global series solutions to nonlinear PDEs whose solutions include shocks. In this talk, we will explore several properties of the Walsh functions which make them ideal for this problem. We will also explore this technique by numerically solving Burgers' equation where the initial and boundary conditions generate a shock. With more work, this technique will be useful in modeling spacecraft reentry into the atmosphere.
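One standard way to generate Walsh-type functions on a dyadic grid is the Sylvester-Hadamard construction; its ordering differs from the sequency-ordered Walsh system, but the orthogonality the series approach relies on is easy to check (illustrative sketch, not from the talk):

```python
import numpy as np

# The first 2^k Walsh-Hadamard functions, sampled on 2^k subintervals,
# via the recursive Sylvester construction; rows are piecewise ±1.
def hadamard(k):
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(3)            # 8 Walsh-Hadamard functions on 8 subintervals
# Orthogonality: H H^T = 8 I, so the sampled functions form an
# orthogonal basis, the key property for global series expansions.
print(np.array_equal(H @ H.T, 8 * np.eye(8)))  # → True
```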

David Wells (Dept. of Mathematics)

Design and Implementation of Pseudospectral Methods [slides]

Pseudospectral methods are an approximation technique for solving partial differential equations (PDEs). A common use of these methods is direct numerical simulation (DNS) of flows on simple geometries. They are similar in spirit to the finite element method, but involve globally defined basis functions, usually chosen as trigonometric functions or Chebyshev or Legendre polynomials. This talk will cover some of the foundational theory of pseudospectral methods, such as proving optimal rates of convergence, as well as numerical examples taken from geophysics.
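The core mechanism can be illustrated with the trigonometric (Fourier) basis, one of the choices mentioned above: differentiate in coefficient space and transform back. This sketch is mine, not from the talk:

```python
import numpy as np

# Fourier pseudospectral differentiation: transform to coefficient
# space, multiply by i*k, transform back. For smooth periodic functions
# the error decays faster than any power of the grid spacing.
n = 64
x = 2 * np.pi * np.arange(n) / n
u = np.exp(np.sin(x))                       # smooth periodic test function
k = np.fft.fftfreq(n, d=1.0 / n)            # integer wavenumbers
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

exact = np.cos(x) * np.exp(np.sin(x))
print(np.max(np.abs(du - exact)) < 1e-10)  # → True
```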

10/01/14 - Mathematical Biology

Ryan Nikin-Beers (Dept. of Mathematics)

Mathematical Modeling of Dengue Viral Infection [slides]

In recent years, dengue viral infection has become one of the most widely spread mosquito-borne diseases in the world. Due to the nature of the immune response to each of the four serotypes of dengue virus, secondary dengue infections put patients at higher risk for severe infection than primary infections do. The current hypothesis for this phenomenon is antibody-dependent enhancement, where strain-specific antibodies from the primary infection enhance infection by a heterologous serotype. To determine the mechanisms responsible for the increase in disease severity, we develop mathematical models of within-host virus-cell interaction along with epidemiological models of virus transmission. We model the effects of antibody responses against primary and secondary virus strains. We find that secondary infections lead to a reduction of virus removal, which is slightly different from the current antibody-dependent enhancement hypothesis. We use the results from the within-host model in an epidemiological multi-scale model.
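As a generic illustration of the within-host modeling style mentioned here, the sketch below integrates a standard target-cell-limited model with made-up parameters; it is not the authors' fitted model:

```python
# A standard target-cell-limited within-host model (illustrative
# parameter values, not fitted values from this work):
#   dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V
def step(state, dt, beta=1e-6, delta=1.0, p=100.0, c=5.0):
    T, I, V = state
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    return (T + dt * dT, I + dt * dI, V + dt * dV)

state = (1e7, 0.0, 10.0)       # target cells, infected cells, free virus
dt = 1e-3
for _ in range(int(20 / dt)):  # forward Euler over 20 days
    state = step(state, dt)

T, I, V = state
# Target cells are depleted as the virus grows and then declines.
print(T < 1e7 and I >= 0.0 and V >= 0.0)  # → True
```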

Samantha Erwin (Dept. of Mathematics)

A mathematical study of germinal center formation [slides]

In this study, we investigate the host-pathogen dynamics leading to successful antibody responses capable of clearing an infection and make predictions on how these mechanisms are altered during chronic infections with viruses such as the human immunodeficiency virus (HIV) and hepatitis B virus (HBV). We develop mathematical models of (antibody-producing) B cells that incorporate the interactions between antigen, CD4 T cells, T follicular helper cells, and B cells of various levels of specificity to the antigen. We determine the characteristics of B cell and T follicular helper cell interactions during non-chronic infections by fitting the model to human germinal center B cell data. We then study how this prediction changes during chronic HIV infection.


Tiger Wang (Dept. of Mathematics)

Averaging analysis on the PEC model under oscillatory shear forces [slides]

It has recently been shown that the PEC (partially extending strand convection) model of Larson is able to describe thixotropic yield stress behavior in the limit where the relaxation time is large. We will discuss the behavior of the model under an imposed periodic shear force $\tau(t)=A+B\sin(\omega t-\Phi)$. We identify regimes of fast and slow dynamics and discuss two cases of oscillatory stresses: Fast dynamics for large $\omega$ and slow dynamics for $\omega=1$. The method of averaging turns out to be a crucial mathematical tool for the analysis.

Kihyo Moon (Dept. of Mathematics)

Immersed Discontinuous Galerkin Methods for the Acoustic Interface Problem [slides]

In this talk we will discuss higher order immersed discontinuous Galerkin finite element methods for the acoustic interface problem. One- and two-dimensional acoustic wave propagation in inhomogeneous media will be covered. To apply the discontinuous Galerkin method to the acoustic problem, we partition the domain into subdomains and use standard polynomials on non-interface elements containing one material and specially designed piecewise polynomial shape functions on interface elements containing more than one material. The interface shape functions are constructed from the physical interface conditions; for the two-dimensional case, extended conditions and interior conditions are used as well. We use the standard discontinuous Galerkin finite element method on non-interface elements and the immersed discontinuous Galerkin finite element method on interface elements. Computational examples will be given for the one- and two-dimensional cases.


Semester Kickoff Social (Potluck)

Come out to the Duck Pond to enjoy some time with your fellow graduate students. Learn about our SIAM student chapter and the activities and opportunities we offer. Please make this potluck a great event by sharing one of your favorite dishes.


Dave Wells and Claus Kadelka (Dept. of Mathematics)

Python and Mathematics [slides]

Python is an easy programming language used by companies like DropBox, Google, Facebook and Reddit for both quick prototyping and long-term development. Large libraries like numpy and Sage add powerful mathematics capabilities (from algebraic geometry and group theory to Krylov methods and ODE solvers) to Python programs. We will give a brief overview of the language and describe some of the interesting things mathematicians are doing with it. One main focus will be on graphing with Python’s matplotlib library.
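A small, hypothetical example of the kind of graphing the talk highlights (the function, file name, and labels are our choices):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # render to a file, no display needed
import matplotlib.pyplot as plt

# Plot a function and its numerically computed derivative together.
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)
dy = np.gradient(y, x)           # finite-difference derivative from numpy

fig, ax = plt.subplots()
ax.plot(x, y, label=r"$\sin(x)$")
ax.plot(x, dy, "--", label="numerical derivative")
ax.set_xlabel("x")
ax.legend()
fig.savefig("sine_plot.png")
```

A handful of lines produce a publication-quality figure, which is much of matplotlib's appeal for mathematicians.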


Shibabrat Naik (ESM Department)

Probabilistic Methods in Phase Space Transport and Chaotic Mixing [slides]

We will present some recent efforts in understanding dynamical systems using eigenmodes of a discretized transfer operator. We will use this transfer operator to define a heuristic for almost-invariant sets. Froyland and co-workers have recently shown that the infinitesimal generator of this operator is a more convenient framework, as it leads to a natural extension to the stochastic differential equation resulting from adding white noise to the abstract Cauchy problem. We will look at illustrative examples of an autonomous flow in 2D, a non-autonomous flow in 3D, and logistic maps.

Dave Wells (Dept. of Mathematics)

Version Control and You [slides]

The use of version control software, also known as revision control software, has been a major advancement in software engineering over the last twenty years. Version control helps authors and programmers solve many problems with things stored as plain text (like LaTeX documents and computer program source code), including:

  1. How can I share my files with multiple coworkers?
  2. How can I examine and merge changes to documents?
  3. How can I keep track of changes over time?
  4. How can I handle multiple, related copies of a single project?
  5. How can I backup my work?
In this presentation we will give an overview of some examples of revision control systems; in particular, we will talk about git, mercurial, and subversion. We will also discuss workflows relevant to mathematicians and scientists that, when properly implemented, automate some of the more tedious parts of document preparation.
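A minimal sketch of the kind of workflow the talk covers, using git (the file names and commit messages are made up for illustration):

```shell
# Create a repository, commit a LaTeX file, and inspect the history.
git init paper
git -C paper config user.email "you@example.com"
git -C paper config user.name "Your Name"
printf '%s\n' '\documentclass{article}' > paper/paper.tex
git -C paper add paper.tex
git -C paper commit -m "Add initial draft"
printf '%s\n' '\begin{document}Hello.\end{document}' >> paper/paper.tex
git -C paper diff                 # examine changes before committing (problem 2)
git -C paper commit -am "Add document body"
git -C paper log --oneline        # the full change history (problem 3)
```

Sharing (problem 1) and branching (problem 4) build on the same commands via `git clone`, `git push`, and `git branch`.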


Alan Lattimer (Dept. of Mathematics)

Approximation of Linearized Boussinesq Equations [slides]

Controlling large-scale nonlinear dynamical systems resulting from partial differential equation models is computationally prohibitive and thus requires simplification, for example model reduction. Unfortunately, for these systems, current model reduction techniques are either computationally intractable or are not guaranteed to preserve the controllability and stability properties in the reduced models. We seek to create a new technique by reducing the linear and nonlinear portions of the dynamical system separately in order to preserve the stability and controllability properties in the reduced model. In our current work, we model a natural circulation cavity using the linearized Boussinesq equations. The system is then reduced using the Iterative Rational Krylov Algorithm (IRKA). We present results on the control of the reduced linear system produced using IRKA compared to the control of the full linear system.

Arafat Khan (ESM Department)

Progressive Failure Analysis of Laminated Composite Structures [slides]

The use of laminated composite structures plays a very significant role in the modern aircraft industry. A proper study of laminated composite structures requires a complete failure analysis in order to understand the design parameters governing the use of composite materials for aerospace applications. The characteristics of composite materials help designers build vehicles with higher fuel efficiency and longer-lasting aircraft, and the use of composite materials has reached beyond the borders of aircraft design. We first review progressive failure criteria for three-dimensional deformations of unidirectional fiber-reinforced composites, and then simplify these for plane stress deformations. These criteria have been implemented as a USDFLD subroutine in the commercial software Abaqus/Standard. The implementation involves solving a system of nonlinear equations to model the progression of failure in the structure. Several example problems have been analyzed with the developed software, and for some problems the computed results have been compared with those obtained by using the CLPT for stress analysis and Puck and Schurmann's criteria for failure analysis. Close agreement between the two sets of results verifies the implementation of the failure criteria in Abaqus. These examples illustrate that the ultimate failure load is significantly higher than the load at which failure first initiates, usually called the first-ply failure load.


Dr. Nick Trefethen (Oxford University, former SIAM president)

Numerical Computing with Chebfun [Chebfun] [Handout]

The SIAM Student Chapter at Virginia Tech was excited to welcome Dr. Nick Trefethen, former SIAM president, to our campus in Blacksburg, VA. Dr. Trefethen was a guest of the College of Science’s Academy of Integrated Science. He gave the initial lecture in the distinguished lecture series associated with the Computational Modeling and Data Analytics initiative. Dr. Trefethen also kindly agreed to give a special talk for the Virginia Tech SIAM Student Chapter, which was attended by both graduate and undergraduate students. Dr. Trefethen talked about Chebfun, an extensive Matlab computing toolbox based on Chebyshev approximations of continuous functions. During the presentation, we were able to interactively try out various features of the toolbox on our laptops, which was very insightful and fun. Overall, this event successfully broadened our chapter’s visibility on campus and was a nice add-on to our regular biweekly speaker series.


Dr. Mark Embree (Dept. of Mathematics)

The Life Cycle of an Eigenvalue Problem [slides]

We solve eigenvalue problems to understand physical systems, such as the resonant modes of a structure, the band gaps of a semiconductor, or the instabilities of a flowing fluid. The transition from the motivating physical problem to numerically computed eigenvalues usually requires a series of approximations.

Through a series of concrete examples (including measured eigenvalues of a damped vibrating string; linearization of quadratic eigenvalue problems; spectral computations for quasicrystals; nonnormal convection-diffusion operators; accurate eigenvalues for beams) we shall demonstrate the steps in this process, and argue that a broad appreciation for this "life cycle" can inform better strategies for computing accurate eigenvalues in applications.


SIAM Poster Session within Research Days

All graduate students that created a poster presented their work to the Math Department and anyone interested from other departments.


SIAM Poster Session within Visitor's Day

All graduate students that created a poster presented their work to possible new graduate students who visit Virginia Tech within the Math Department's Visitor's Days.


Claus Kadelka (Virginia Bioinformatics Institute & Dept. of Mathematics)

Concise Functional Enrichment of Ranked Gene Lists [slides]

Advances in molecular biology have led to the broad availability of genome-wide expression data and the development of gene interaction databases like the Gene Ontology. Standard techniques used in genome-wide expression profiling (like RNA-Seq) output long lists of genes, ranked by expression levels. Gene set enrichment methods that simultaneously analyze the expression and the interaction of all genes have been developed to reveal overall trends, previously hidden in the data. Most gene set enrichment methods only work in a binary mode; that is, either a gene is considered differentially expressed or not. This view disregards the various levels of differential expression, which could be used to obtain a better explanation of the data. Other common shortcomings include high redundancy and non-specificity of the explanatory set, as well as the sole use of the hypergeometric p-value, which favors terms that annotate many genes. To our knowledge, all gene set enrichment methods exhibit at least one of these deficiencies. We present a new enrichment algorithm that neither disregards important information by operating in binary mode, nor shares any of the common shortcomings of gene enrichment methods.

Sook Ha (Virginia Bioinformatics Institute)

PlantSimLab - An intuitive web application for plant biologists [slides]

It is increasingly recognized that at the molecular level nonlinear networks of heterogeneous molecules control many plant processes, so that systems biology provides a valuable approach in this field, building on the integration of experimental biology with mathematical modeling. However, many plant biologists do not possess the mathematical background needed to build and manipulate mathematical models well enough to use them as tools for hypothesis generation. The fundamental hypothesis underlying the PlantSimLab project is that, with the right type of models and the right interface, biologists can construct, analyze, and use mathematical models for hypothesis generation, without direct assistance from modelers. To assure that the software meets user needs, our design methodology relies on a collection of case studies that take advantage of ongoing plant biology research projects. The biological focus of the project and the resulting software tool is on host-pathogen interactions and the immune response. Several different model types are being used for modeling gene regulatory, signaling, and metabolic networks. Two very popular types are ordinary differential equations (ODE) and discrete models, such as Boolean networks and their generalizations. Discrete models are more qualitative but have nonetheless proven to be very useful as tools for hypothesis generation in plant biology, since they do not require knowledge of the many parameters in molecular pathways and are also more intuitive than ODE models. Over the last several years Dr. Laubenbacher’s research group and his collaborators have developed a mathematical framework that accommodates discrete models of different types and provides sophisticated mathematical and computational tools for their construction and analysis. 
The new web-based software tool PlantSimLab uses this framework, together with a user interface that allows a plant biologist to construct models based solely on biological information and interrogate them in a manner similar to that of studying molecular networks experimentally. This software tool has been under development in close collaboration with the plant biology community. The focus is on the study of molecular networks relevant to plant biology.
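As a toy illustration of the Boolean network model type mentioned above (our own example, unrelated to PlantSimLab's actual models), a three-gene regulatory network can be written down and iterated to a steady state in a few lines:

```python
# Each gene's next state is a Boolean function of the current state.
# The rules below are invented for illustration only.
rules = {
    "A": lambda s: s["C"],                 # A is activated by C
    "B": lambda s: s["A"] and not s["C"],  # B needs A and is repressed by C
    "C": lambda s: not s["B"],             # C is repressed by B
}

def step(state):
    """Synchronously update every gene from the current state."""
    return {gene: bool(rule(state)) for gene, rule in rules.items()}

state = {"A": True, "B": True, "C": False}
for _ in range(10):
    state = step(state)
print(state)   # {'A': True, 'B': False, 'C': True} -- a fixed point
```

No rate constants are needed, which is exactly why such qualitative models are attractive when molecular parameters are unknown.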


Nabil Chaabane (Dept. of Mathematics)

A modified Taylor-Hood immersed finite element for Stokes interface problems [slides]

We present bilinear immersed finite element (IFE) methods for solving Stokes interface problems with structured Cartesian meshes that are independent of the location and geometry of the interface. Basic features of the bilinear IFE functions, including the unisolvent property, will be discussed. Numerical examples are provided to demonstrate that the bilinear IFE spaces have the optimal approximation capability, and that numerical solutions produced by a modified Taylor-Hood method with these IFE functions for the Stokes interface problem also converge optimally in both $L^2$ and $H^1$ norms.

Mohamed Jrad (ESM Department)

Buckling Analysis of Curvilinearly Stiffened Composite Panels with Cracks [slides]

Stiffeners attached to composite panels may significantly increase the overall buckling load of the resulting stiffened structure. First, a buckling analysis of a composite panel with attached longitudinal stiffeners under compressive loads is performed using the Ritz method with trigonometric functions. Results are then compared to those from ABAQUS FEA for different shell elements. The cases of a composite panel with one, two, and three stiffeners are investigated. The effect of the distance between the stiffeners on the buckling load is also studied, as is the variation of the buckling load and buckling modes with the stiffeners’ height. It is shown that there is an optimum value of stiffener height beyond which the structural response of the stiffened panel is not improved and the buckling load does not increase. Furthermore, there exist different critical values of stiffener height at which the buckling mode of the structure changes. Next, a buckling analysis of a composite panel with two straight stiffeners and a crack at the center is performed. Finally, a buckling analysis of a composite panel with curvilinear stiffeners and a crack at the center is conducted. ABAQUS is used for these two examples, and the results show that panels with a larger crack have a reduced buckling load. It is also shown that the buckling load decreases slightly when higher-order 2D shell elements are used.

01/30/2014 & 02/03/2014

Poster Design Help Sessions

Older grad students who are experienced in creating and giving poster presentations will lead this session, answer questions, and provide help where necessary. PowerPoint and LaTeX templates are available upon request at any time. There will be no formal talk, so please bring your laptop with your current poster design and lots of questions!


Lizette Zietsman (ICAM & Math Department)

Why Should I Present a Poster?

Posters are an excellent and efficient way to publicize one's work, if they are done well. This presentation will focus on some basics of poster design. Since posters are only one of many means we have to communicate with others about our work and our field, a broader discussion of scientific communication will also touch on other aspects, such as oral presentations, publications, proposal writing, and informal communication.


Ujwal Krothapalli (ECE Department)

Machine Learning and Perception [slides]

How can we capture the human decision-making process in an algorithm? How can we build systems that can see and perceive like humans? Machine learning and machine vision offer paths to realize these goals. In this talk I will describe algorithms that can be used to describe and model human perception and human vision. Such models can be used to automate tasks and to identify patterns and trends in the vast amounts of data that we encounter every day.

Arielle Grim-McNally (Dept. of Mathematics)

Utilizing Properties of Slater Matrices to Design Better Preconditioners for Quantum Monte Carlo Methods [slides]

Quantum Monte Carlo (QMC) methods are often used to solve problems involving many body systems. In implementing QMC methods, sequences of Slater matrices are constructed and solving the linear systems for each matrix is a major part of the cost. These matrices change by one row at a time corresponding to a small move by a particle. Preconditioned linear solves are utilized, reducing the cost from $O(n^3)$ to $O(n^2)$. We analyze the properties of Slater matrices, attempting to further reduce the cost. Various preconditioners have been examined including an ILUTP preconditioner that is recycled over a number of steps as well as an ILUTP preconditioner that is incrementally updated at each step. Recent work has also focused on a robust approximate inverse preconditioner. Application of these preconditioners to problems arising from predicting micro air vehicle aerodynamics is also considered.
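The abstract does not detail the linear algebra, but the key fact behind the $O(n^2)$ cost is worth spelling out: a single-row change is a rank-one update, so the inverse can be refreshed with the Sherman-Morrison formula instead of being recomputed. A sketch (our illustration, not the authors' preconditioner):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
Ainv = np.linalg.inv(A)          # O(n^3), done once

# Replace row k of A: A_new = A + e_k (r - A[k])^T, a rank-one update.
k = 2
new_row = rng.standard_normal(n)
u = np.zeros(n); u[k] = 1.0      # e_k
v = new_row - A[k]
A_new = A + np.outer(u, v)

# Sherman-Morrison:
# (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
Au = Ainv @ u                    # each product here is O(n^2) or cheaper
vA = v @ Ainv
Ainv_new = Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

print(np.allclose(Ainv_new, np.linalg.inv(A_new)))  # True
```

Every operation after the initial inverse is at most $O(n^2)$, which is the cost reduction the abstract cites for successive particle moves.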


Holly Grant (Dept. of Mathematics)

A viscoelastic constitutive model that generates the dynamics of a thixotropic yield stress fluid [slides]

Yield stress fluids require a certain amount of stress to be applied before they yield and flow. For a subclass called "thixotropic" yield stress fluids, the value of the yield stress depends on the amount of time that has passed since the fluid was last placed under stress. We investigate the homogeneous uniaxial extensional flow of a model viscoelastic fluid, with no built-in assumptions about the value of the yield stress. For a large relaxation time, there is a fast time scale for flow and a slow time scale for small changes in the microstructure. The distinct stages of evolution from equilibrium are characterized using multi-scale asymptotic analyses, together with numerical simulations, of the original equations. The time-dependent solutions reveal a number of features associated with thixotropic yield stress fluids, such as delayed yielding and hysteresis.


Dr. Raghu Pasupathy (ISE Department)

"Online" Quantile and Density Estimators

The standard estimator $q_{\alpha}(n)$ for the $\alpha$-quantile $q_{\alpha}$ of a random variable $X$, given $n$ observations from the distribution of $X$, is obtained by inverting the empirical cumulative distribution function (cdf) constructed from the observations. It is well known that $q_{\alpha}(n)$ requires $O(n)$ storage, and that the mean squared error of $q_{\alpha}(n)$ (with respect to $q_{\alpha}$) decays as $O(n^{-1})$. In this talk, we present an alternative to $q_{\alpha}(n)$ that seems to require dramatically less storage with negligible loss in convergence rate. The proposed estimator, $\tilde{q}_{\alpha}(n)$, relies on an alternative cdf that is constructed by accumulating the observed random variates into variable-sized bins that progressively become finer around the quantile. The bin sizes are strategically adjusted to ensure that the increased bias due to binning does not adversely affect the resulting convergence rate. We will present an “online” version of the estimator $\tilde{q}_{\alpha}(n)$, along with results on its consistency, convergence rates, and storage requirements. If time permits, we will discuss analogous ideas for density estimation.
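For contrast, a classical $O(1)$-storage baseline (not the speaker's binned estimator) is the Robbins-Monro stochastic-approximation recursion for a quantile, which keeps only a single running value:

```python
import random

def online_quantile(stream, alpha):
    """Robbins-Monro recursion: q <- q + (1/n) * (alpha - 1{x <= q}).
    O(1) storage; a textbook baseline, not the binned estimator from the talk."""
    q = 0.0
    for n, x in enumerate(stream, start=1):
        q += (alpha - (1.0 if x <= q else 0.0)) / n
    return q

random.seed(42)
data = (random.random() for _ in range(200_000))  # Uniform(0,1) stream
estimate = online_quantile(data, 0.9)
print(estimate)   # close to the true 0.9-quantile of Uniform(0,1), i.e. 0.9
```

The binned-cdf construction in the talk aims for the same storage profile while recovering a convergence rate close to the $O(n^{-1})$ of the full empirical estimator.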


Adam Bowman (Dept. of Mathematics)

Toward a Rigorous Justification of the Three-Body Impact Parameter Approximation [slides]

Scattering experiments are one of our main sources of information about the behavior of the subatomic world. Quantum scattering theory is, like most of quantum mechanics, concerned with developing solutions to the time-dependent Schrodinger equation - something that turns out to be well-nigh impossible when multiple particles are involved. In this talk, I will consider an approximation technique in three-body quantum scattering wherein two particles are very large compared to the third. I will list results that begin to provide a rigorous justification of the impact parameter model - which treats the heavy particles with classical mechanics and the light particle with quantum mechanics - by showing that certain elements of the impact parameter scattering matrix approach their full three-body analogs as the masses of the heavy bodies approach infinity.

Karim Fadhloun (Department of Civil Engineering)

A Microscopic and Macroscopic Analysis of Moving Bottlenecks [slides]

A significant number of research efforts have attempted to study and analyze the case in which a vehicle is moving slower than the traffic stream (also known as a moving bottleneck) and its impact on the traffic stream behavior upstream, downstream and abreast the slow vehicle. This research is interested in the analysis of the above phenomenon both microscopically and macroscopically by mainly looking at the impact of change in the fundamental diagram on the flow passing the moving bottleneck. Subsequently, a general model that estimates the moving bottleneck passing rate using simulated data from the INTEGRATION software is developed. The proposed model accounts explicitly for the impact of the fundamental diagram on the estimated passing rates which are demonstrated to vary in a quadratic convex fashion as a function of the bottleneck speed. The resulting microscopic model is used to develop a framework that allows for modeling the moving bottleneck phenomenon from a macroscopic standpoint. Specifically, an explicit expression of the bottleneck diagram, a flow-density relationship that completely defines the phenomenon macroscopically, is proposed and the behavior of the traffic downstream and abreast of the moving obstruction is depicted. 
The developed framework resolves two decades of conflicting approaches to modeling moving bottlenecks and constitutes a feasible description of the phenomenon.


Fall Break Trip to Industry

[Featured article on SIAM's main website]

Nineteen members of the SIAM Student Chapter at Virginia Tech attended a miniconference on jobs in industry and government agencies at the Arlington Virginia Tech Satellite Campus. The companies and agencies represented included

Every student had the opportunity to talk to people who work in non-academic settings, which made the whole event a great success. Thanks to Dr. Lizette Zietsman and Dr. Terry Herdman, who helped with this conference from the initial idea through the planning and organization.


Fall Break Trip Info Session [slides]


Daniel Schmidt (Dept. of Mathematics)

How to move (or get trapped) in a random solid [slides]

The motion of an electron through a large, disordered solid, such as a metal alloy, is a surprisingly difficult problem. In theory, we could attempt to solve the Schrodinger equation directly to find the wave function of the electron. In reality, this is impractical, partly because we would need to consider specific configurations of individual ions, rather than bulk properties of the material. A better approach is to consider the statistical distribution of energy levels (which comes down to an eigenvalue problem). As it turns out, there is a useful connection between the statistics of energy levels and the dynamics of electrons occupying those levels. This allows us to determine electron behavior for some cases, but a number of problems are still unsolved. In particular, most models so far have ignored the force between electrons. The multi-particle models that do not make this simplification are an area of active research. The models described here are relevant to the study of conductivity of materials, including the theory of superconductors, which is still poorly understood. Similar techniques can be applied to motion of light through a medium with optical imperfections, as well as certain problems in quantum computing.

Amir E. Bozorg Magham (ESM Department)

Dynamical systems tools and Causality analysis [slides]

Causality analysis is an old aspiration. For many systems we have fundamental rules, such as conservation of energy, momentum, and mass, which help us investigate the role of each individual element in the whole process. For many other complex systems, only a few measurements are available and little information is accessible about the mutual interactions of the involved variables, so finding the causal relationships between interacting elements is very difficult. Because of these limitations, a simple correlation metric is frequently taken as an indicator of causality, which is not a good measure for nonlinear systems. The notion of using a reconstructed phase space and convergent cross mapping for causality analysis is a new approach that enables a better understanding of causal relationships in nonlinear systems. One important advantage of this method is that it does not require the separation of variables that is essential for the classical Granger method. In this talk, the concepts of this method are described and some results for different systems are discussed.


Steven Boyce (Dept. of Mathematics)

Modeling the variations of students’ coordination of units [slides]

I build on descriptions of a hypothetical learning trajectory by which students reorganize their ways of thinking about number to coordinate additional levels of units. Levels of units are mathematics educators’ constructs for characterizing the psychological structures necessary for a robust (elementary) understanding of the rational number system, one in which improper fractions and negative integers are full-fledged members. For example, three levels of units are theorized as necessary for immediately understanding the fraction 7/4 as 7 iterations of 1/4, 4 of which would make 1. I describe an initial approach for modeling the variations in sixth-grade students’ units coordinations across contexts by considering changes in students’ propensity for attending to and coordinating an additional level of units. I look forward to feedback from others also interested in applying mathematics to the teaching and learning of mathematics.

David Plaxco (Dept. of Mathematics)

How to develop a better understanding of linear independence of functions [slides]

In this talk, I will use examples of student work collected in my research with Dr. Wawro to discuss a theoretical framework in the mathematics education literature. From a group of freshmen students in an Honors Linear Algebra course, we collected responses to the question "Consider the functions $F(t) = (t,1), G(t) = (t^2,2)$, and $H(t) = (t^3,0)$ with $F,G,H: \mathbb{R} \rightarrow \mathbb{R}^2$. Would you say these functions are linearly dependent for all $t \in \mathbb{R}$? Explain your reasoning." Through our analysis of the students’ various approaches and the mathematics involved in this question, we found Sfard’s (1991) process/object framework to be useful in describing the mathematical constructs involved in developing a more powerful understanding of linear independence of functions.
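The abstract leaves the interview question open, but it can be explored numerically (our sketch; the function names are ours). The subtlety the question probes is the distinction between pointwise dependence of vectors and dependence of the functions themselves:

```python
import numpy as np

def FGH(t):
    """Rows are F(t) = (t, 1), G(t) = (t^2, 2), H(t) = (t^3, 0)."""
    return np.array([[t, 1.0], [t**2, 2.0], [t**3, 0.0]])

# Pointwise: for any fixed t these are three vectors in R^2, so the
# matrix has rank at most 2 -- they are always linearly dependent.
for t in [-1.0, 0.5, 3.0]:
    print(t, np.linalg.matrix_rank(FGH(t)))   # rank <= 2 for every t

# As functions: dependence requires one set of constants a, b, c (not all
# zero) with a*F + b*G + c*H = 0 for ALL t. Sampling at three t values
# stacks a 6x3 system; full column rank means only the trivial combination.
ts = [-1.0, 0.5, 3.0]
M = np.vstack([FGH(t).T for t in ts])   # columns correspond to a, b, c
print(np.linalg.matrix_rank(M))         # 3
```

The two rank computations disagree, which is exactly the process/object tension the research examines: students reasoning pointwise reach a different conclusion than students reasoning about the functions as objects.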


Alex Grimm (ICAM & Dept. of Mathematics)

A Taming Method for Burgers Equation [slides]

The problem of establishing local existence and uniqueness of solutions to systems of differential equations is well understood and has a long history. However, the problem of proving global existence and uniqueness is more difficult and fails even for some very simple ordinary differential equations. It is still not known whether the 3D Navier-Stokes equations have globally unique solutions, and this open problem is one of the Millennium Prize Problems. Nevertheless, many of these mathematical models are extremely useful for understanding complex physical systems. For years, people have considered methods for modifying these equations in order to obtain models that still capture the observed fundamental physics but for which one can rigorously establish global results. In this work we focus on a taming method to achieve this goal and apply taming to modeling and numerical problems. The method is applied to a class of nonlinear differential equations with conservative nonlinearities and to Burgers' equation.

Michael Fischer (University of Stuttgart, Germany)

Model Order Reduction in Elastic Multibody Simulation [website] [more info]

The method of elastic multibody systems enables the analysis, simulation, and optimization of mechanical systems, e.g. appearing in robotics or automotive and power engineering, with consideration of elastic deformations and nonlinear rigid body motions. One major step towards an efficient simulation process is the reduction of the elastic degrees of freedom of the corresponding second order system. Different model order reduction techniques are developed, implemented, and applied to large mechanical systems at the Institute of Engineering and Computational Mechanics at the University of Stuttgart in Germany. In this presentation, certain research topics in model reduction of elastic multibody systems, like frequency-weighted, automated, and error-controlled reduction, are discussed. The reduction of industrial-sized systems with millions of degrees of freedom, as well as the application of parametric reduction techniques to improve the simulation of moving loads, will be presented.


Semester Kickoff Social (Potluck)

Come out to the Duck Pond to enjoy some time with your fellow graduate students. Learn about our SIAM student chapter and the activities and opportunities we offer. Please make this potluck a great event by sharing one of your favorite dishes.


Semester Kickoff Meeting

The SIAM student chapter would like to welcome every graduate student in Mathematics, as well as interested students from other departments, to the first meeting. In an open discussion, we are going to talk about what we stand for, what you can expect to see, and - most importantly - how YOU can participate. We will also have David Wells as the first speaker of the semester. Please mark your calendar and come by!

David Wells (Dept. of Mathematics)

Experiences as an Intern at Los Alamos National Laboratory

David Wells is a PhD student in the department who spent the summer at Los Alamos National Laboratory (LANL) in Los Alamos, New Mexico. He worked on a variety of projects related to scientific computing, such as porting legacy Fortran code to modern toolchains and the use of supercomputing and spectral methods for ocean modeling. He will discuss aspects of working and living in Los Alamos, a brief history and summary of the lab, as well as details of his work this summer.


Year End Social (Potluck)

About 25 graduate students gathered at the Duck Pond to celebrate the end of the semester. The weather was perfect, so everybody could enjoy the soft drinks, buns, hotdogs and burgers provided by our chapter, along with the various dishes people brought. Thanks to everyone who brought some food, and thanks for a great semester with several interesting talks and our first SIAM poster session. We hope to welcome even more people to our activities next semester.

The SIAM student chapter wants to thank everybody who participated in our events this semester, and we hope to see you again next semester. Special thanks and recognition go to our graduating committee members: our President, Vitor Nunes, and our Vice President Research, Kasie Farlow. Good luck in your future endeavors!


Vitor Nunes (ICAM & Dept. of Mathematics)

Fréchet Sensitivity Analysis and Parameter Estimation in Groundwater Flow Models [slides]

In this work we develop and analyze algorithms motivated by the parameter estimation problem corresponding to a multilayer aquifer/interbed groundwater flow model. The parameter estimation problem is formulated as an optimization problem, then addressed with algorithms based on adjoint equations, quasi-Newton schemes, and multilevel optimization. In addition to the parameter estimation problem, we consider properties of the parameter-to-solution map, including invertibility (known as identifiability) and differentiability properties of the map. For differentiability, we extend existing results on Fréchet sensitivity analysis to the groundwater flow equation. This is achieved by proving that the Fréchet derivative of the solution operator is Hilbert-Schmidt, under smoothness assumptions on the parameter space. In addition, we approximate this operator by time-dependent matrices, whose singular values and singular vectors converge to their infinite-dimensional counterparts. This decomposition proves to be very useful, as it provides vital information as to which perturbations in the distributed parameters lead to the most significant changes in the solutions, as well as applications to uncertainty quantification. Numerical results complement our theoretical findings.

Meijing Zhang (Geosciences Department)

A More Accurate and Powerful Tool for Managing Groundwater Resources and Predicting Land Subsidence in Las Vegas Valley [slides]

In Las Vegas Valley, land subsidence has been geodetically monitored since 1935. Due to intensive groundwater pumping since 1905, compaction of the aquifer system has led to several large subsidence bowls and destructive earth fissures. What makes the problem more complex is the heterogeneous aquifer system and the variable thickness of the delay interbeds distributed among the aquifers. Several models for the Las Vegas basin have been developed previously; however, none of them can accurately simulate observed subsidence patterns because of inadequate parameterization. To better manage groundwater resources and predict future subsidence, we update and develop a more accurate groundwater management model for Las Vegas Valley that incorporates MODFLOW with the SUB (subsidence) and HFB (horizontal flow barrier) packages. With the advent of InSAR (interferometric synthetic aperture radar), distributed spatial and temporal subsidence measurements can be obtained. We successfully implement UCODE with the discrete adjoint algorithm to automatically identify suitable zonations and inversely calculate parameter values from hydraulic head and subsidence measurements in a synthetic model. This automated process can remove user bias and provide a far more accurate and robust parameter zonation distribution. We anticipate that our work will yield the most accurate and powerful tool to date for managing groundwater resources in Las Vegas Valley.


Cory Brunson (Virginia Bioinformatics Institute & Dept. of Mathematics)

Caution in interpreting graph-theoretic diagnostics [slides]

Graphs, in their simplest form collections of nodes, some pairs of which are tied, have become a universal framework for modeling and diagnosing complex networks. While they are usually leveraged to answer questions motivated by context, there has also emerged a general theory of real-world network structure. While a standard suite of graph-theoretic tools can distinguish among graphs constructed from different varieties of networks, much of this diversity has been shown to depend only on the type of richer network structure from which the graphs are constructed. I'll present our network of interest, the collaborative mathematics literature from 1985 to 2009. I'll discuss what we might infer (and have inferred) about how mathematicians collaborate from several standard diagnostics on the simple graph of coauthorship. I'll then introduce some new but related diagnostics motivated by the richer structure of paper–author attribution and discuss their implications for these inferences.

Matthew Oremland (Virginia Bioinformatics Institute & Dept. of Mathematics)

Capturing dynamics of agent-based models with difference equations [slides]

Agent-based models (ABMs) are computer simulations wherein individuals (agents) follow local rules. From these local rules, global dynamics emerge. ABMs can be used to simulate a wide variety of systems, and have been used in applications ranging from atomic interactions to social media (and everything in between). We investigate transforming ABMs into systems of difference equations amenable to rigorous mathematical analysis. Equations describing the behavior of ABMs can be difficult to obtain analytically, and are often sensitive to changes in the ABM formulation. We build an example ABM and study the interplay between the rules of the model and the form of the equations.
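A minimal sketch of the idea, using an assumed toy activation ABM and its mean-field difference equation (not the model from the talk):

```python
import random

random.seed(1)

# Toy activation ABM (assumed example): N agents are inactive (0) or
# active (1); each step, an inactive agent activates with probability
# beta * (current active fraction).
N, beta, steps = 10_000, 0.5, 20
agents = [1] * 1000 + [0] * 9000   # 10% initially active
x = 0.1                            # mean-field state

for _ in range(steps):
    frac = sum(agents) / N
    agents = [a or int(random.random() < beta * frac) for a in agents]
    # Corresponding mean-field difference equation:
    #   x_{t+1} = x_t + beta * x_t * (1 - x_t)
    x = x + beta * x * (1 - x)

abm_frac = sum(agents) / N
assert x > 0.95                    # both converge toward full activation
assert abs(abm_frac - x) < 0.03    # the ABM tracks the difference equation
```

For large agent populations the difference equation captures the ABM's average behavior; the interesting complications arise when agent rules do not average out so cleanly.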


The SIAM chapter congratulates Dr. John Burns for being conferred SIAM Fellowship for his contributions to control and approximation of partial differential equations. He is one of 33 distinguished members of SIAM nominated for exemplary research as well as outstanding service to the community. Dr. Burns is a professor in the Department of Mathematics and at the Interdisciplinary Center for Applied Mathematics (ICAM). His area of research spans applied and computational control, partial differential equations, distributed parameter systems, fluid/structural control systems, smart materials, and modeling and control of high performance buildings. Dr. Burns has served as Vice President of SIAM, was the founding Editor of the SIAM Book Series on Advances in Design and Control, and is a past Chair of the SIAM Activity Group on Systems and Control. He was awarded the SIAM Reid Prize in 2010.


Mustafa Ali Arat (Mechanical Engineering)

Control Applications in Vehicle Safety Systems [slides]

To assist the driver in emergency situations and prevent possible failure scenarios, vehicles are equipped with electronic control units and additional actuators (e.g. anti-lock brakes (ABS), stability controllers (ESP, ESC), and active suspensions), generally known as active vehicle safety systems. These control systems can intervene in the driver's steering, braking or throttle inputs to improve stability and avoid a crash. This talk will provide insight into the development of algorithms for these control systems, with special emphasis on the tire-road interaction, as tires are the primary generators of the forces and moments acting on the vehicle chassis and can therefore dramatically change the vehicle's response to driver inputs. The talk will conclude with the validation procedure for such algorithms using numerical analysis or hardware-in-the-loop testing.

Syed Amaar Ahmad (Electrical Engineering)

A Multi-Point Transmission Scheme with Power Adaptations for LTE Networks [slides]

The emergence of LTE-Advanced (4G) wireless networks promises to offer an unprecedented increase in data rates, coverage and reliability to mobile users through the use of new technologies in wireless communications. In the LTE standards, the use of low-cost relays is proposed to enhance coverage for those mobile users that are located farther away from a base station (cell tower). Such relays serve as intermediate destinations that decode-and-forward user data to a base station. Currently, however, a mobile user establishes a communication link to only a single access point (either a relay or a base station) at a given time. In this presentation, we consider the potential benefit of allowing mobiles to split their transmit power over simultaneous links to the base station and the relay, in effect transmitting two parallel data flows. We model each mobile's decisions as to: (i) which access point to connect to (either a relay or both a relay and the base station); and (ii) how to allocate transmit power over these links so as to maximize its total data rate. We show that this additional flexibility in the selection of access points leads to a substantial increase in the aggregate data rate of a cellular network.
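The power-split decision can be sketched as a small optimization over two parallel Shannon-rate links. The channel gains, power budget, and grid search below are illustrative assumptions, not the scheme from the talk:

```python
from math import log2

# Assumed channel gains and power budget (illustrative only).
g_bs, g_relay = 1.0, 1.5   # per-unit-power SNR on each link
P = 2.0                    # total transmit power budget

def total_rate(p_bs):
    """Sum of Shannon rates over the two parallel links (bits/s/Hz)."""
    p_relay = P - p_bs
    return log2(1 + g_bs * p_bs) + log2(1 + g_relay * p_relay)

# Grid search over the power split between base station and relay.
best = max(total_rate(P * k / 1000) for k in range(1001))
single_bs = total_rate(P)      # all power on the base-station link
single_relay = total_rate(0)   # all power on the relay link

# Splitting power across both links beats either single link here.
assert best > max(single_bs, single_relay) + 0.1
```

With these (assumed) gains the optimum is an interior split, which is the effect the talk's scheme exploits; for sufficiently unbalanced gains the optimum degenerates to a single link.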


Dr. Terry Herdman (Associate Vice President for Research Computing at VT, ICAM Director & Professor at the Dept. of Mathematics)

High Performance Computing at Virginia Tech and Research Projects at ICAM [slides]

Computational facilities and support services provided by Advanced Research Computing (ARC) at Virginia Tech will be discussed. Advanced Research Computing at Virginia Tech is an innovative and interdisciplinary environment advancing computational science, engineering and technology. ARC provides computing and visualization resources and support for advanced computational research. ARC also offers educational programs and training on scientific computing, encouraging the development of knowledge and skills in computational tools and techniques among undergraduate and graduate students as well as research faculty and staff. This presentation will also provide an overview of the research of the Interdisciplinary Center for Applied Mathematics (ICAM). We will present applications of mathematics to past and present research projects, including airfoil flutter; mathematical models for the design and control of large space-based antenna systems; homeland security projects; and the design and control of energy-efficient buildings.


Christopher Jarvis (ICAM & Dept. of Mathematics)

Inclusion of Actuator Dynamics in Optimal Control Problems [slides]

In many applications we want to affect the dynamics of a system by changing only its boundary. The abstract formulation of this problem has many difficulties and may not reflect the physical implementation of such a system. In this talk we examine a method that models the physical control implementation and provides a simpler analytic framework at only marginally increased cost. Examples and numerical results will be presented.

Allen Bowers (Civil Engineering)

A Mathematical Model to prevent icing of bridges [slides]

We are investigating the use of pile foundations to extract heat from the ground to prevent icing of bridge decks in the winter. In this concept, heat energy is extracted from the ground by a fluid that circulates through tubing in the pile foundation. This warm fluid is further circulated within a tubing system in the bridge deck slab to prevent icing. We are investigating this concept by field testing a fully instrumented model system and through numerical modelling. The field test is being used to both prove the concept and to verify the numerical model.

Our chapter is featured on the main SIAM website. Click here!


1st SIAM Research Symposium [abstracts]

Poster presentations are growing in popularity for both graduate and undergraduate research at conferences. Posters provide an excellent way to introduce your research to others. Fourteen Math graduate students took this opportunity and presented their research to a broad audience of Department professors and other interested people. Overall, this first research symposium was a great success and we hope to hold it again next year. A list of abstracts can be found here.

01/28 & 29/13

Poster Design Help Session (Seda Arat, Kasie Farlow, Claus Kadelka, Boris Kramer)

Senior graduate students who are experienced in creating and giving poster presentations will lead this session, answer questions, and provide help where necessary. PowerPoint and LaTeX templates are available upon request at any time. There will be no formal talk, so please bring your laptop with your current poster design and lots of questions!


Dr. Dong-Yun Kim (Department of Statistics)

Application of Change Point in Time-to-Event Data [slides]

Change points occur in many different areas of study, e.g. the medical, industrial, and environmental sciences, to name a few. In statistics, due to the mathematical difficulties involved, the theory of change points has progressed rather slowly, in spite of its enormous potential in applications. In this talk I'll illustrate some real-world examples where change-point problems naturally occur. Using time-to-event data, I'll discuss how a change point can be tested for, and how it can be incorporated into the framework to construct better statistical models.
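As a generic illustration of detecting a change point (a simple mean-shift example rather than the talk's time-to-event setting), one can fit a constant mean on each side of every candidate split and pick the split with the smallest total squared error:

```python
# Illustrative data: a mean shift from 0 to 1 after observation 20.
data = [0.0] * 20 + [1.0] * 20

def sse(seg):
    """Sum of squared errors around the segment mean."""
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

# For each candidate change point k, fit a constant mean on each side
# and pick the split that minimizes the total squared error.
costs = {k: sse(data[:k]) + sse(data[k:]) for k in range(1, len(data))}
k_hat = min(costs, key=costs.get)

assert k_hat == 20   # the true change point is recovered
```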

Boris Aguilar (Department of Computer Science)

Efficient computation of electrostatic interactions in Biomolecular Modeling via the generalized Born model [slides]

Constructing a physically realistic, but at the same time computationally efficient, representation of the molecular surface and volume remains a challenge. Here we illustrate some of the challenges and describe a solution in the context of the generalized Born model -- the most popular analytical approximation for computing electrostatic interactions in bio-molecules in the presence of continuum (implicit) solvent. The proposed solution is based on the so-called R6 effective Born radii, computed as a $1/|r|^6$ (R6) integral over an approximation to the molecular volume. Details of the proposed approximation are discussed. The methodology is applied to molecules of different sizes and shapes, and the results are compared with experiments as well as with more computationally expensive explicit-solvent approaches in which individual water molecules are treated explicitly at the atomic level. Overall, the proposed solution is capable of computing electrostatic interactions with a reasonable balance between efficiency and accuracy, especially when applied to small drug-like molecules, for which the accuracy of the proposed model is comparable to that of the traditional explicit solvent models that are orders of magnitude slower.


Madison Brandon (Virginia Bioinformatics Institute & Dept. of Mathematics)

A Discrete Model of the Iron Regulatory Network in Aspergillus Fumigatus [slides]

Aspergillus fumigatus is an opportunistic fungal pathogen responsible for invasive aspergillosis (IA) in immunocompromised patients. Currently there is a pressing need for better detection and treatment strategies for IA. Iron has been shown to be essential for A. fumigatus virulence in a mouse model. Therefore a more comprehensive understanding of A. fumigatus iron regulation could help to identify potential drug targets and improve treatment strategies for IA. One way to address this need is to build a computational model, which can act as a virtual laboratory to predict and test hypotheses. For this research, a discrete mathematical model of iron uptake and utilization by A. fumigatus was constructed based upon information in the literature. Simulation and techniques from computational algebra were used to analyze the model for novel behavior under both iron-limiting and iron-sufficient conditions. The model will be validated via in vitro experiments.

Dr. Anael Verdugo (Virginia Bioinformatics Institute)

Dynamics of a Gene Network Model with Time Delay [slides]

Dynamic modeling and analysis of biological networks has become a central component of applied mathematics. This has allowed mathematicians to develop interesting computational tools used to address specific questions about the dynamic properties of cellular processes. In this talk I will describe a dynamical systems approach for the study of a mRNA-protein model that incorporates negative feedback, nonlinear interactions, and delays. I will show how the system of delay differential equations undergoes an important transition from equilibrium to oscillations via a Hopf bifurcation, using a center manifold approximation. The analysis yields closed-form expressions for the limit cycle amplitude and frequency of oscillation, which are then used to study the Hopf point and prove that delays can drive oscillations in gene activity.


Dr. Reinhard Laubenbacher (Virginia Bioinformatics Institute & Dept. of Mathematics)

Scientific posters: the good, the bad, and the ugly [slides]

Dr. Laubenbacher has been a Professor at the Virginia Bioinformatics Institute and a Professor in the Department of Mathematics at Virginia Tech since 2001. He is also an Adjunct Professor in the Department of Cancer Biology at Wake Forest University in Winston-Salem (NC) and Affiliate Faculty in the Virginia Tech Wake Forest University School of Biomedical Engineering and Sciences. He was a Professor of Mathematics at New Mexico State University and was a visiting researcher at Los Alamos National Laboratories, the Mathematical Science Research Institute at Berkeley and Cornell. His research interests include

Abstract: Posters are an excellent and efficient way to publicize one's work, if they are done well. This presentation will focus on some basics of poster design. Since posters are only one of many means we have to communicate with others about our work and our field, a broader discussion of scientific communication will also touch on other aspects, such as oral presentations, publications, proposal writing, and informal communication.


Dr. Nicole Abaid (ESM Department)

Biologically-inspired mathematical modeling of fish schools [slides]

Coordinated motion in fish schools has inspired a variety of mathematical models for multi-agent systems. Control of such systems, in turn, finds applications in engineering and biological contexts, ranging from mobile robot platoons and wireless sensor networks to animal protection and production. However, many reduced-order models rely on user-defined behavioral rules which do not necessarily capitalize on evolutionarily-refined behavior observed in schools. Indeed, such optimal behaviors enable individual fish to swim more efficiently, forage more widely, and avoid predators more successfully as a member of a school. In this talk, we will discuss research findings from the Dynamical Systems Laboratory of the Polytechnic Institute of New York University on mathematical modeling of collective behavior in fish from both theoretical and experimental perspectives. We will introduce a stochastic network inspired by communication among schooling fish and use this network to build a mathematical model of fish schools in the absence and presence of leaders. Analytical results for consensus over such networks will be presented and experimental and numerical methodologies drawn from biology and computer science will be used for model assessment.
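The flavor of consensus over such networks can be sketched in the simplest deterministic case: repeated averaging with a fixed doubly stochastic weight matrix on an assumed ring topology (the talk's networks are stochastic and include leaders; this is only the baseline):

```python
import numpy as np

# Assumed topology: 5 agents on a ring, each averaging with its neighbors.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25
# W is doubly stochastic, so repeated averaging preserves the mean.

x = np.array([1.0, 3.0, 5.0, 7.0, 9.0])   # initial headings/opinions
avg = x.mean()                             # the consensus value

for _ in range(200):
    x = W @ x

assert np.allclose(x, avg, atol=1e-6)      # all agents agree on the average
```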

Nikolaos Artavanis (Finance Department)

Downside Risk and Long-Horizon Stock Return Reversals

We propose a rational, risk-based explanation for the long-horizon stock return reversal phenomenon of DeBondt and Thaler (1985). Specifically, we argue that long-horizon return reversals reflect a premium for bearing downside risk. Consistent with this hypothesis, we find that downside betas of past losers are significantly greater than downside betas of past winners, and that the inclusion of downside beta in Fama-MacBeth regressions completely subsumes the reversal effect. We note that downside risk offers a theoretical justification for the "distress risk" explanation of the contrarian effect of Fama and French (1996). Consistent with this view, we find that downside beta is highly correlated with firm characteristics associated with distress (dividend reductions and delisting) and explains long-horizon return reversals better than SMB/HML and other proxies for distress risk in the literature.
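As a sketch of the key quantity, a downside beta is the usual beta estimate conditioned on below-average market returns; the return series below are made-up numbers for illustration only:

```python
# Made-up return series for illustration.
market = [-2.0, -1.0, 1.0, 2.0]
asset  = [-4.0, -2.0, 1.0, 2.0]   # falls harder than the market when it drops

def beta(r, m):
    """Ordinary beta: cov(r, m) / var(m), using population moments."""
    mr, mm = sum(r) / len(r), sum(m) / len(m)
    cov = sum((ri - mr) * (mi - mm) for ri, mi in zip(r, m)) / len(m)
    var = sum((mi - mm) ** 2 for mi in m) / len(m)
    return cov / var

# Downside beta: the same estimate restricted to below-average market returns.
mu_m = pairsum = sum(market) / len(market)
down = [(ri, mi) for ri, mi in zip(asset, market) if mi < mu_m]
r_down = [ri for ri, mi in down]
m_down = [mi for ri, mi in down]

b_full = beta(asset, market)
b_down = beta(r_down, m_down)

assert b_down > b_full   # this asset carries extra downside risk
```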


South-Eastern Atlantic Regional Conference on Differential Equations (Wake Forest University, Winston-Salem, NC)

Members of the SIAM Student Chapter at Virginia Tech attended the South-Eastern Atlantic Regional Conference on Differential Equations at Wake Forest University, Winston-Salem, NC on the 19th and 20th of October 2012. We were happy to attend this well organized conference at WFU with interesting keynote speakers and a great atmosphere. Special thanks to WFU for travel support for all of our attendees. The following people gave presentations about their ongoing research: Erich Foster, Dave Wells, Taiga Wang, Boris Kramer, Vitor Nunes, Kasie Farlow, Nabil Chabaane.


Caleb Magruder and Dr. Saifon Chaturantabut (ICAM & Dept. of Mathematics)

Model Reduction

Large-scale simulations are critical in studying complex physical phenomena such as oceanography, weather prediction, circuit simulation and aerospace design. However, these simulations often lead to overwhelming demands on computational resources motivating model reduction. The goal is to produce a simpler "reduced-order" model which allows for much faster and cheaper simulation while providing high-fidelity approximation to the original model. Various dynamical system norms are developed to measure the efficacy of these approximations. Two major model reduction frameworks are discussed: interpolation and POD. We discuss sufficient conditions for optimal approximations within the interpolation framework. In addition, we will introduce some basic ideas concerning model reduction for nonlinear dynamical systems and then present a hybrid approach that combines the popular Proper Orthogonal Decomposition (POD) with a new method, the Discrete Empirical Interpolation Method (DEIM). The accuracy and efficiency of reduced-order dynamical systems derived with this hybrid POD-DEIM approach will be discussed and illustrated on two models: neuron modeling and nonlinear miscible flow in porous media. The talk will touch on topics in linear algebra, numerical analysis, complex analysis, optimization and differential equations.


Dr. Bill Henshaw (Centre for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore California)

Solving Fluid Structure Interaction Problems on Overlapping Grids [slides]

This talk will discuss the numerical solution of fluid structure interaction (FSI) problems on overlapping grids. Overlapping grids are an efficient, flexible and accurate way to represent complex, possibly moving, geometry using a collection of overlapping structured grids. For incompressible flows with moving geometry we have been developing a scheme that is based on an approximated-factored compact scheme for the momentum equations together with a multigrid solver for the pressure equation. The overall scheme is fourth-order accurate in space and (currently) second-order accurate in time. The scheme will be described and results will be presented for some three-dimensional (parallel) computations of flows with moving rigid-bodies. In recent work, we have also been developing an FSI scheme based on the use of deforming composite grids (DCG). In the DCG approach, moving boundary-fitted grids are used near the deforming solid interfaces and these overlap non-moving grids which cover the majority of the domain. For compressible flows and elastic solids we have derived a new interface projection scheme, based on the solution of a fluid-solid Riemann problem, that overcomes the well known "added-mass" instability for light solids. The FSI-DCG approach is described and validated for some fluid structure problems involving high speed compressible flows coupled to rigid and elastic solids. The interesting case of a shock hitting an ellipse of zero mass is also presented.


The first SIAM meeting of the semester was a joint meeting between SIAM, AWM and SGTA. Everybody was invited and encouraged to attend. The three organizations gave brief presentations about their purposes and plans for the semester.


Dr. Miroslav Stoyanov (Oak Ridge National Laboratory)

An Introduction to Uncertainty Quantification

Physical phenomena in science and engineering are usually modeled as deterministic mappings from a set of input parameters to a desired output solution (the maps usually involve solving a differential equation). However, in practice, exact values for the parameters are either unknown or we are interested in the behavior of the physical system for a large range of possible values. The field of Uncertainty Quantification recasts the model by representing the input parameters as random variables with appropriate probability distributions. Under the new model, the solution becomes random as well. Thus, we no longer look for a specific value of our output, but rather the statistical properties of the random solution. In this talk, we present a short introduction to the fundamental problem of Uncertainty Quantification. We present various ways to model uncertainty in the input parameters as well as several common techniques for propagating the uncertainty through the model.
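The recasting described above can be sketched with plain Monte Carlo on an assumed toy model $y = e^{-kT}$ with an uncertain rate $k$ (an illustration, not an example from the talk):

```python
import random
from math import exp

random.seed(0)

# Toy model: y = exp(-k*T); the input parameter k is uncertain and is
# modeled as a random variable k ~ Uniform(a, b) (assumed distribution).
T = 1.0
a, b = 0.5, 1.5

# Propagate the input uncertainty through the model by sampling.
samples = [exp(-random.uniform(a, b) * T) for _ in range(100_000)]
mc_mean = sum(samples) / len(samples)

# Analytic check: E[exp(-kT)] = (exp(-aT) - exp(-bT)) / ((b - a) * T).
exact_mean = (exp(-a * T) - exp(-b * T)) / ((b - a) * T)

assert abs(mc_mean - exact_mean) < 0.005
```

Monte Carlo is the bluntest propagation technique; the talk also covers more efficient alternatives for the same task.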


Seda Arat (Virginia Bioinformatics Institute & Dept. of Mathematics)

Production of Greenhouse Gases in Lake Erie

Lake Erie is one of the Great Lakes in North America and has a favorable environment for agriculture. On the other hand, it has witnessed recurrent summertime oxygen depletion and related microbial production of greenhouse gases such as nitrous oxide ($N_2O$). In fact, $N_2O$ is an intermediate in denitrification, a microbial process that converts nitrate ($NO_3$) to nitrogen gas ($N_2$). This talk will introduce the gene regulatory network, and its (discrete) mathematical model, of Pseudomonas aeruginosa, one of the microbes performing denitrification in Lake Erie, and then analyze the model by changing the concentration levels of environmental parameters such as oxygen ($O_2$), nitrate ($NO_3$) and phosphorus ($P$) to see how these parameters affect the long-run behavior of the network. This model helps us generate hypotheses about the reason for the accumulation of greenhouse gases in Lake Erie, which is still unknown.

Katarzyna Swirydowicz (Virginia Bioinformatics Institute & Dept. of Mathematics)

The Discrete Jacobian

When one wants to understand the dynamics of a system of differential equations, one uses the Jacobian. The eigenvalues of the Jacobian give us a lot of information about the stability of steady states and possible bifurcations. In 1985 the French researcher Francis Robert defined an analogous tool for Boolean systems and named it the "discrete Jacobian" or "discrete derivative". This tool proved to be useful, yet not as powerful as its continuous big brother. The proof of the conjecture of Shih and Ho -- that if all eigenvalues of the discrete Jacobian are 0 in every state (the discrete spectral radius is 0), then the Boolean system has a unique steady state -- brought a lot of attention to the discrete Jacobian as a tool for exploring the dynamics of Boolean systems. During my talk, I will introduce the main definitions and present the most interesting theorems regarding the discrete Jacobian.
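A minimal sketch of the discrete Jacobian and the Shih-Ho criterion, on an assumed two-node Boolean network:

```python
from itertools import product

# Assumed example network: f1(x) = 0, f2(x) = x1.
def f(x):
    return (0, x[0])

n = 2

def discrete_jacobian(x):
    """J[i][j] = 1 iff flipping coordinate j of x changes coordinate i of f(x)."""
    fx = f(x)
    J = [[0] * n for _ in range(n)]
    for j in range(n):
        y = list(x)
        y[j] ^= 1
        fy = f(tuple(y))
        for i in range(n):
            J[i][j] = fx[i] ^ fy[i]
    return J

# Here every Jacobian equals [[0,0],[1,0]]: a nilpotent 0-1 matrix, so the
# spectral radius is 0 at every state, and the Shih-Ho theorem guarantees
# a unique steady state.
for x in product((0, 1), repeat=n):
    assert discrete_jacobian(x) == [[0, 0], [1, 0]]

fixed = [x for x in product((0, 1), repeat=n) if f(x) == x]
assert fixed == [(0, 0)]   # indeed, the unique fixed point
```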

Claus Kadelka (Virginia Bioinformatics Institute & Dept. of Mathematics)

Robustness Analysis of Gene Regulatory Networks in a Stochastic Discrete Dynamical Systems Framework

In experiments it has been discovered that the key step of gene expression is a strongly stochastic process. Further studies can then be realized either by conducting even more experiments, or by finding a stochastic modeling framework that captures the underlying features of gene regulation. For such a framework, both exhaustive simulations and analytic calculations deliver the desired results. However, with growing complexity, simulations become increasingly time-consuming and unsuitable, so that analytic results are preferable. The Laubenbacher group developed an easily comprehensible stochastic modeling framework that captures the ubiquitous uncertainty of gene regulatory networks. The Derrida plot is a well-known indicator of the stability of a Boolean network. The so-called Derrida value -- the Hamming distance between the images of two configurations at a given initial Hamming distance, after applying the update rules -- is generally small for networks that exhibit robust behavior and large for networks with more chaotic behavior. In published Boolean networks, one subclass of Boolean functions, the class of nested canalizing functions -- introduced already in the 1940s in the context of gene regulation -- has been found to be chosen particularly often as update functions. Recently, this subclass has been partitioned further by characterizing each function by its Hamming weight. Explicit formulas that enable the calculation of Derrida values in the context of the stochastic delay system have been found: first, for nested canalizing functions in general, and second, for nested canalizing functions of a particular Hamming weight. An analysis of the derived formulas shows that the Derrida values increase monotonically with the Hamming weight, with functions of high Hamming weight having even larger Derrida values than random functions.
Therefore, networks based on nested canalizing functions do not exhibit very robust behavior in general; only those based on functions with a small Hamming weight do. In conclusion, formulas for the Derrida values of any nested canalizing function have been discovered. Since this makes simulations dispensable, the application of Derrida plots for robustness investigations of complex networks has become much simpler.
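For a small network the Derrida value at Hamming distance 1 can be computed exhaustively; the three-node rules below are assumed for illustration, not taken from the work described:

```python
from itertools import product

# Assumed example network, n = 3 (f1 and f2 are nested canalizing).
def f(x):
    x1, x2, x3 = x
    return (x1 & x2, x2 | x3, x1)

n = 3

def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

# Derrida value at Hamming distance 1: average Hamming distance of the
# images of all pairs of states that differ in exactly one coordinate.
total, pairs = 0, 0
for x in product((0, 1), repeat=n):
    for j in range(n):
        y = list(x)
        y[j] ^= 1
        total += hamming(f(x), f(tuple(y)))
        pairs += 1

derrida = total / pairs
assert abs(derrida - 1.0) < 1e-12   # perturbations neither grow nor shrink
```

Values below 1 indicate that single-bit perturbations tend to die out (robust, ordered dynamics); values above 1 indicate that they spread (chaotic dynamics).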


Zhenwei Cao (Dept. of Mathematics)

Anderson localization on a 3D lattice

Random Schrödinger operators can be used to model electrons propagating through random media; the Anderson model is one such model on a discrete lattice. The talk will introduce the physical and mathematical concepts related to Anderson localization and review some of the machinery developed to prove it. Then I will show how a Feynman diagrammatic expansion technique can be used to prove localization when the potentials at different sites are correlated.


Xu Zhang (Dept. of Mathematics)

Immersed Finite Element Methods for Solving Moving Interface Problems

In science and engineering, many simulations are carried out over domains consisting of multiple materials separated by curves/surfaces. This often leads to the so-called interface problems of partial differential equations whose coefficients are piecewise constants. Using conventional finite element methods, convergence cannot be guaranteed unless meshes are constructed according to the material interfaces. Geometrically, this means each element needs to be essentially on one side of a material interface. For this reason, the mesh in a conventional finite element method for solving an interface problem has to be unstructured in order to handle non-trivial interface configurations. This restriction usually has many negative impacts on the simulations if material interfaces evolve. In this presentation, we will discuss how the recently developed immersed finite elements (IFE) can alleviate this limitation of conventional finite element methods. We will present both semi-discrete and fully discrete IFE methods for solving parabolic equations whose diffusion coefficient is discontinuous across a time-dependent interface. These methods can use a fixed structured mesh even when the interface moves. Numerical examples will be provided to demonstrate features of these IFE methods.

Boris Kramer (ICAM & Dept. of Mathematics)

POD-Based Model Reduction for a Coupled Burgers Equation

The coupled Burgers equation is motivated by the Boussinesq equations that are often used to model the thermal-fluid dynamics of air in buildings. The design and implementation of controllers relies heavily on rapid solutions of complex models such as the Boussinesq equations, which in this work are mimicked by the coupled Burgers equation. Thus, we examine the feasibility and efficiency of Proper Orthogonal Decomposition (POD) for the coupled Burgers equation. First, finite element data is generated to construct the POD basis. Using POD, we reduce the system to a "minimal" number of ODEs and conduct numerous numerical studies comparing the POD and Group FE methods. Further numerical experiments consider an application in which the dynamics are projected onto a POD basis and the governing parameters of the system are then varied.
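The snapshot-then-project workflow described above can be sketched in a few lines for a linear stand-in problem. The heat equation, the implicit Euler discretization, and all parameter values below are assumptions chosen for illustration; the talk's actual setting is the nonlinear coupled Burgers system and a Group FE comparison:

```python
import numpy as np

# Full-order model: 1D heat equation u_t = nu * u_xx with zero Dirichlet
# boundary conditions, discretized by finite differences.
n, nu = 100, 0.05
x = np.linspace(0.0, 1.0, n + 2)[1:-1]            # interior grid points
dx = x[1] - x[0]
A = nu * (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / dx**2

u0 = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)
dt, steps = 1e-3, 200
M_full = np.eye(n) - dt * A                       # implicit Euler matrix
u, snaps = u0.copy(), [u0]
for _ in range(steps):
    u = np.linalg.solve(M_full, u)
    snaps.append(u)
S = np.array(snaps).T                             # one snapshot per column

# POD basis from the snapshots, then Galerkin projection to r = 3 ODEs.
Phi = np.linalg.svd(S, full_matrices=False)[0][:, :3]
Ar = Phi.T @ A @ Phi                              # reduced 3x3 operator
a = Phi.T @ u0                                    # reduced initial condition
M_red = np.eye(3) - dt * Ar
for _ in range(steps):
    a = np.linalg.solve(M_red, a)

err = np.linalg.norm(Phi @ a - u) / np.linalg.norm(u)   # reduced vs. full
```

Here the trajectory lies in the span of two Fourier modes, so three POD modes reproduce the full solution essentially exactly; for nonlinear systems like the coupled Burgers equation, the reduction error depends on how well the snapshots cover the dynamics.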


Christopher Jarvis (ICAM & Dept. of Mathematics)

Model Reduction using POD

In many cases, numerical solutions of complex nonlinear problems can be computationally expensive due to the spatial and temporal discretizations required to meet a desired accuracy. In contrast, much of the solution information lies within a subspace whose dimension is significantly lower than the full dimension of the discretization. The goal of Reduced Order Modeling (ROM) is to create a low-dimensional model that contains a large percentage of the information from the full discretized system while saving computational time and storage. Proper Orthogonal Decomposition (POD) is one such model reduction technique. In the finite dimensional case, for any integer r less than the dimension of a standard numerical solution, POD generates a set of orthonormal basis functions that represents a given set of data optimally, in the mean-squared-error sense, over any other basis of rank r. To implement POD, a collection of data snapshots at various time points must be found; for this reason POD is also known as the method of snapshots. Typically the ensemble of data snapshots is drawn from the same problem with constant coefficients and parameters. During a design cycle, however, it is useful to understand the impact of changing parameters as variables in the total design trade space. The desire is to use reduced order models that allow for quick analysis of the impacts resulting from changing parameter values. We will therefore study the accuracy of POD reduced order models across a range of parameter values, along with potential improvements. To facilitate the study we use Burgers' equation in one dimension ($u_{t} + u u_{x} - q u_{xx} = 0$).
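The mean-squared optimality of the POD basis can be illustrated directly from the singular value decomposition of a snapshot matrix: the rank-r POD projection error equals the sum of the discarded squared singular values (the Eckart–Young theorem). The synthetic snapshot data below is an assumption standing in for actual Burgers' equation solutions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
t = np.linspace(0, 1, 50)
# Synthetic snapshots: two smooth space-time structures plus small noise.
S = (np.outer(np.sin(np.pi * x), np.exp(-t))
     + 0.3 * np.outer(np.sin(3 * np.pi * x), np.cos(2 * np.pi * t))
     + 0.01 * rng.standard_normal((200, 50)))

# POD basis = left singular vectors of the snapshot matrix.
Phi, sigma, _ = np.linalg.svd(S, full_matrices=False)
r = 2
Phi_r = Phi[:, :r]
S_r = Phi_r @ (Phi_r.T @ S)          # project snapshots onto the POD basis

err = np.linalg.norm(S - S_r, 'fro')**2
tail = np.sum(sigma[r:]**2)           # Eckart-Young: equal to err
```

Two POD modes capture nearly all of the snapshot energy here; in practice r is chosen so that the retained singular values account for a desired fraction of the total.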

Dr. Brian Y. Lattimer (Dept. of Mechanical Engineering)

Real-Time Environment Mapping and Decision Making for Firefighting

Autonomous robots are being developed here to perform firefighting onboard ships. Navigation and firefighting activities require the use of various sensors to map the barriers in the environment as well as identify the location of the fire for suppression. This results in large matrices of data that require rapid manipulation and processing to support decision making, such as walking direction and location of suppression agent application. An overview of this project will be provided, along with a discussion of some of the challenges in data processing.


Morgan Dominy (Dept. of Mathematics)

The Vigenère cipher

Once considered unbreakable, the Vigenère cipher is now easily broken. We will examine the use of statistical tests, as well as the vector space $R^{26}$ and the statistical properties of English text, to break a Vigenère cipher in a matter of seconds.
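A minimal sketch of this style of statistical attack, assuming the standard index-of-coincidence test for the key length and a chi-squared frequency test for each Caesar coset (the frequency table and the 0.06 threshold are illustrative choices):

```python
from collections import Counter

ENGLISH = [0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
           0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
           0.063, 0.091, 0.028, 0.010, 0.024, 0.002, 0.020, 0.001]  # a..z

def encrypt(plain, key):
    """Vigenere-encrypt lowercase a-z text with a lowercase key."""
    return ''.join(chr((ord(c) - 97 + ord(key[i % len(key)]) - 97) % 26 + 97)
                   for i, c in enumerate(plain))

def index_of_coincidence(s):
    n, counts = len(s), Counter(s)
    return sum(v * (v - 1) for v in counts.values()) / (n * (n - 1))

def key_length(cipher, max_len=10):
    # Smallest period whose cosets look monoalphabetic (IoC near 0.067
    # for English vs. about 0.038 for uniformly random letters).
    for k in range(1, max_len + 1):
        if sum(index_of_coincidence(cipher[i::k]) for i in range(k)) / k > 0.06:
            return k
    return 1

def best_shift(coset):
    # Chi-squared fit of the de-shifted coset against English frequencies.
    def chi2(shift):
        n = len(coset)
        counts = Counter((ord(c) - 97 - shift) % 26 for c in coset)
        return sum((counts[i] - n * ENGLISH[i]) ** 2 / (n * ENGLISH[i])
                   for i in range(26))
    return min(range(26), key=chi2)

def break_vigenere(cipher):
    k = key_length(cipher)
    return ''.join(chr(best_shift(cipher[i::k]) + 97) for i in range(k))
```

Given a few hundred letters of English ciphertext, this recovers the key in well under a second.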

Hans-Werner van Wyk (ICAM & Dept. of Mathematics)

Dynamical Systems with Uncertain Parameters

Many natural phenomena whose behavior can be accurately described in terms of (partial) differential equations may nevertheless exhibit a degree of uncertainty due to statistical variability in the model parameters. We discuss methods for determining statistical properties of the model output when the parameters are random. Often it is impossible to measure parameter values directly, and we have to estimate them from (uncertain) measurements of related variables. This raises the question: is it possible to obtain a statistical description of the physical parameters, given some knowledge of the statistics of the measured (related) variable?


Weiwei Hu (ICAM & Dept. of Mathematics)

Feedback Control of the Boussinesq Equations with Implications for Sensor Location

In this talk, we present theoretical and numerical results for a feedback control problem governed by the Boussinesq equations. The problem is motivated by recent interest in designing and controlling energy efficient building systems. In particular, we show that it is possible to locally exponentially stabilize the nonlinear Boussinesq equations by applying Neumann/Robin type boundary control on a bounded and connected domain. The feedback controller is obtained by solving a Linear Quadratic Regulator problem for the linearized Boussinesq equations. The feedback functional gains provide insight into determining good spatial locations of sensors for the optimal operation of energy efficient buildings.
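For a finite-dimensional stand-in, the LQR step can be sketched by solving the continuous-time algebraic Riccati equation via the stable invariant subspace of the Hamiltonian matrix. The double-integrator example and all matrices below are illustrative assumptions, far simpler than the boundary-controlled Boussinesq setting of the talk:

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    """Solve A'P + PA - P B R^{-1} B' P + Q = 0 via the stable invariant
    subspace of the Hamiltonian matrix, and return the LQR feedback gain
    K (for u = -K x) along with P. A sketch for small dense problems."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]          # n eigenvectors with Re(lambda) < 0
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))
    return Rinv @ B.T @ P, P

# Example: double integrator x'' = u with state cost Q = I and R = 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K, P = lqr_gain(A, B, np.eye(2), np.array([[1.0]]))
```

For this example the closed-loop matrix A - BK has eigenvalues in the open left half-plane, i.e. the feedback is exponentially stabilizing; the known analytic gain is K = [1, sqrt(3)].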

Erich Foster (ICAM & Dept. of Mathematics)

An Introduction to the Argyris Finite Element

The Argyris finite element is a high-order finite element with 21 degrees of freedom and quintic basis functions. These 21 degrees of freedom allow for $C^1$ finite elements and a sixth-order rate of convergence in $h$. However, the inclusion of normal derivatives as degrees of freedom requires an extra transformation beyond the standard affine transformation. In this talk I will introduce the Argyris finite element and discuss some pitfalls.


Vitor Nunes (ICAM & Dept. of Mathematics)

Groundwater Flow Modeling and Sensitivity Analysis

In this presentation I will introduce a model of groundwater flow and the inverse problem associated with it. I will present the challenges of parameter estimation, and sensitivity analysis as a strategy for obtaining better results.

Austin Amaya (Dept. of Mathematics)

Beurling-Lax and Linear System Representations of Shift-Invariant Spaces: Continuous Time/Half-Plane Versions

We define forward and backward shift operators acting on $L^2$ of the imaginary axis. We consider pairs of subspaces which together form a direct-sum decomposition of $L^2$, such that the first space is forward shift-invariant and the second is backward shift-invariant. We present a theorem, following work of Ball and Helton in 1984, that any such pair of subspaces can be characterized as the image under a multiplication operator of the right and left half-plane Hardy spaces. We go on to provide a further representation of our subspace pairs in terms of operators that arise in the study of linear systems, the so-called observation and control operators. In particular, we do this in the context of $L^2$ well-posed continuous-time linear systems as developed by Staffans in 2005. We summarize the development of these linear systems, as the continuous-time context differs significantly from the more commonly studied discrete-time context.

last updated: September 3, 2015