JuliaCon 2017: Talks

Wednesday, June 21
 

9:30am

Pkg3: Julia's New Package Manager
This talk covers the design and implementation of Pkg3, the third (and hopefully final!) major iteration of Julia's built-in package manager. We'll begin with some history: what worked and didn't work in the two previous iterations of the package manager. Pkg3 tries to marry the better parts of systems like Python's virtualenv and Rust's cargo, while supporting federated and layered package registries, and supporting interactive usage as well as reproducible environments and reliable deployment of code in production. We'll nerd out a bit with some graph theory and how difficult it is to select compatible sets of package versions, and how much harder still it is to make version resolution understandable and predictable. But it won't be all theory – we'll also cover eminently practical subjects like "how do I install packages?"
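For the practical side, a hedged sketch of the interface Pkg3 eventually shipped with (the pkg> REPL mode and Pkg API of later Julia releases); the exact commands postdate this talk:

    # pressing `]` at the Julia REPL switches to the package-manager prompt
    pkg> add JSON            # install a package into the active project
    pkg> status              # list the project's dependencies
    pkg> activate myproject  # switch to a project-specific environment

    # the same operations from ordinary Julia code
    using Pkg
    Pkg.add("JSON")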


Speakers

Stefan Karpinski

Julia Computing, Inc. / NYU
co-creator of Julia, co-founder of Julia Computing


Wednesday June 21, 2017 9:30am - 10:06am
West Pauley, Pauley Ballroom, Berkeley, CA

10:40am

Miletus: A Financial Modelling Suite in Julia
Miletus is a financial software suite in Julia, with a financial contract specification language and extensive modelling features. In this talk, we’ll discuss the design principles involved in how to model a contract from primitive components, and how Julia’s language features lend themselves intuitively to this task. We’ll then talk about the various features of the software suite such as closed form models, binomial trees and computation of price sensitivities (aka “the Greeks”), providing several examples and code snippets, along with comparisons with other popular frameworks in this space.


Speakers

Ranjan Anantharaman

Applications Engineer, Julia Computing
Ranjan Anantharaman is a data scientist at Julia Computing. His interests span applied mathematics and numerical computing, and he enjoys working with computation across a variety of fields and domains.

Simon Byrne

Julia Computing, Inc.
Dr Simon Byrne is a quantitative software developer at Julia Computing, where he implements cutting edge numerical routines for statistical and financial models. Simon has a Ph.D. in statistics from the University of Cambridge, and has extensive experience in computational statistics...


Wednesday June 21, 2017 10:40am - 11:16am
West Pauley, Pauley Ballroom, Berkeley, CA

10:40am

Query.jl: Query Almost Anything in Julia
Query is a package for querying Julia data sources. Its role is similar to LINQ in C# and dplyr in R. It can filter, project, join and group data from any iterable data source. It has enhanced support for querying arrays, DataFrames, DataTables, TypedTables, IndexedTables and any DataStream source (e.g. CSV, Feather, SQLite etc.). The package also defines an interface for tabular data that allows a) dispatch on any tabular data source and b) simple conversions of tabular data representations. The talk will first introduce Query from a user perspective and highlight different examples of queries that the package makes feasible. The second half of the talk will dive deep into the internals of the package and explain the various extension points that the package provides.
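As a rough illustration of the query syntax (the column names are invented for the example, in the style of the package's documentation):

    using Query, DataFrames

    df = DataFrame(name=["John", "Sally", "Kirk"], age=[23.0, 42.0, 59.0], children=[3, 5, 2])

    # filter rows, project to selected columns, and collect into a DataFrame
    q = @from i in df begin
        @where i.age > 40
        @select {i.name, i.children}
        @collect DataFrame
    end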


Speakers

David Anthoff

Assistant Professor, UC Berkeley
David Anthoff is an environmental economist who studies climate change and environmental policy. He co-develops the integrated assessment model FUND that is used widely in academic research and in policy analysis. His research has appeared in Science, Nature Climate Change, the Journal...


Wednesday June 21, 2017 10:40am - 11:16am
East Pauley, Pauley Ballroom, Berkeley, CA

1:30pm

AoT or JIT : How Does Julia Work?
Julia uses a unique mix of techniques adopted from conventional static and dynamic languages to provide a special blend of high-performance and flexible compute kernels. This allows it to simultaneously have a fully ahead-of-time-compiled code model – while permitting (even encouraging) code updates at runtime – and a fully runtime-interpreted interface – while permitting extensive compile-time optimization. In this talk, I will examine some of the trade-offs and limitations this requires of user code, especially on common first-class code evaluation features – such as eval and incremental pre-compilation – as well as advanced features – such as @generated functions and @pure. We will also try to take a look at the internal layout and implementation of some of these data structures, and how the compiler works to maintain their correctness over time, despite other changes to the system.
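A small, hedged illustration of two of the features mentioned above; the function names are invented for the example.

    # a @generated function: the body runs on the argument *types* at compile time
    # and returns the expression that becomes the method body for that signature
    @generated function tuple_sum(t::NTuple{N,T}) where {N,T}
        ex = :(zero(T))
        for i in 1:N
            ex = :($ex + t[$i])
        end
        return ex
    end

    tuple_sum((1, 2, 3))             # 6, with the loop unrolled at compile time

    # eval defines code at runtime; invokelatest crosses the "world age" boundary
    eval(:(double(x) = 2x))
    Base.invokelatest(double, 21)    # 42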


Speakers

Jameson Nash

Julia Computing
Full-time JuliaLang contributor, working on compiler integration, language design, and performance tooling.


Wednesday June 21, 2017 1:30pm - 2:06pm
East Pauley, Pauley Ballroom, Berkeley, CA

1:30pm

Using Parallel Computing for Macroeconomic Forecasting at the Federal Reserve Bank of New York
This talk will give an overview of how researchers at the Federal Reserve Bank of New York have implemented economic forecasting and other post-estimation analyses of dynamic stochastic general equilibrium (DSGE) models using Julia’s parallel computing framework.

This is part of the most recent release of our DSGE.jl package, following our ports of the DSGE model solution and estimation steps from MATLAB that were presented at JuliaCon in 2016. I will discuss the technical challenges and constraints we faced in our production environment and how we used Julia's parallel computing tools to substantially reduce both the time and memory usage required to forecast our models. I will present our experiences with the different means of parallel computing offered in Julia - including an extended attempt at using DistributedArrays.jl - and discuss what we have learned about parallelization, both in Julia and in general. In addition, I will provide some of our new perspectives on using Julia in a production setting at an academic and policy institution.

DSGE models are sometimes called the workhorses of modern macroeconomics, applying insights from microeconomics to inform our understanding of the economy as a whole. They are used to forecast economic variables, investigate counterfactual scenarios, and understand the impact of monetary policy. The New York Fed’s DSGE model is a large-scale model of the U.S. economy, which incorporates the zero lower bound, price/wage stickiness, financial frictions, and other realistic features of the economy. Solving, estimating, and forecasting it presents a series of high-dimensional problems which are well suited for implementation in Julia.
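As a generic, hedged sketch of the kind of embarrassingly parallel workload described above (this is not the Fed's DSGE.jl code; on Julia 0.6 the Distributed functions lived in Base):

    using Distributed
    addprocs(4)   # local workers; a cluster manager would be used in production

    @everywhere function forecast_draw(i)
        # stand-in for forecasting the model under one posterior parameter draw
        sum(randn(1_000))
    end

    results = pmap(forecast_draw, 1:10_000)   # distribute draws across workers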

This talk reflects the experience of the author and does not represent an endorsement by the Federal Reserve Bank of New York or the Federal Reserve System of any particular product or service. The views expressed in this talk are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.


Speakers

Pearl Li

Research Analyst, Federal Reserve Bank of New York
I'm a Research Analyst at the New York Fed using Julia to estimate and forecast macroeconomic models. I'm a 2016 graduate of the University of Pennsylvania, where I studied math and economics. I'm interested in applying the frontier of scientific computing to economic research.


Wednesday June 21, 2017 1:30pm - 2:06pm
West Pauley, Pauley Ballroom, Berkeley, CA

2:06pm

Programming NVIDIA GPUs in Julia with CUDAnative.jl
GPUs have typically been programmed using low-level languages like CUDA and OpenCL, providing full control over the hardware at the expense of developer efficiency. CUDAnative.jl makes it possible to program GPUs directly from Julia, in case you need the flexibility to write your own kernel functions, without having to fall back to CUDA C or binary libraries.

In this talk, I will give an overview of CUDAnative.jl with its features and restrictions, explain the technology behind it, and sketch our future plans.
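A minimal, hedged kernel sketch: the launch syntax has changed across releases (the 2017 form passed the launch configuration as a tuple; later releases use keywords), and CuArray is assumed to come from the companion GPU array package.

    using CUDAnative, CuArrays

    # element-wise addition: each thread handles one index
    function vadd(a, b, c)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(c)
            @inbounds c[i] = a[i] + b[i]
        end
        return nothing
    end

    a = CuArray(rand(Float32, 1024))
    b = CuArray(rand(Float32, 1024))
    c = similar(a)
    @cuda threads=256 blocks=4 vadd(a, b, c)   # older releases: @cuda (4, 256) vadd(a, b, c)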


Speakers

Tim Besard

PhD student, Ghent University
PhD student on compilation techniques for high-level languages, working on the Julia programming language and its CUDAnative.jl GPU back-end.


Wednesday June 21, 2017 2:06pm - 2:42pm
East Pauley, Pauley Ballroom, Berkeley, CA

2:06pm

The Dolo Modeling Framework
We present a family of three Julia packages that together constitute a complete framework to describe and solve rational expectation models in economics. Dolang.jl is an equation parser and compiler that understands how to compile latex-like strings describing systems of equations into efficient Julia functions for evaluating the levels or derivatives of the equations. Dolo.jl leverages Dolang and implements a variety of frontier algorithms for solving a wide class of discrete time, continuous control rational expectations models. Finally, Dyno.jl builds upon Dolang to implement a Julia prototype of the Matlab-based dynare software library used extensively throughout academia and the public sector to approximate the solution to and estimate rational expectations models.


Speakers

Spencer Lyon

PhD student (economics), NYU
Economics Ph.D. student at NYU Stern. Active Julia member since 0.2


Wednesday June 21, 2017 2:06pm - 2:42pm
West Pauley, Pauley Ballroom, Berkeley, CA

3:40pm

Equations, inequalities and global optimisation: guaranteed solutions using interval methods and constraint propagation
How can we find all solutions of a system of nonlinear equations, the “feasible set” satisfied by a collection of inequalities, or the global optimum of a complicated function? These are all known to be difficult problems in numerical analysis.

In this talk, we will show how to solve these, in a guaranteed way, using a collection of related methods based on interval arithmetic, provided by the IntervalArithmetic.jl package. The starting point is a simple dimension-independent bisection code, which can be enhanced in a variety of ways. This method is rigorous: it is guaranteed to find all roots, or to find the global minimum, respectively.

One key idea is the use of continuous constraint propagation, which allows us to remove large portions of the search space that are infeasible. We will explain the basics of this method, in particular the “forward-backward contractor”, and describe the implementation in the IntervalConstraintProgramming.jl package.

This package generates forward and backward code automatically from a Julia expression, using metaprogramming techniques. These are combined into “contractors”, i.e. operators that contract a box without removing any portion of the set of interest. These, in turn, give a rigorous answer to the question whether a given box lies inside the feasible set or not. In this way, a paving (collection of boxes) is built up that approximates the set.
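A hedged sketch in the style of the IntervalConstraintProgramming.jl README, paving the unit disc:

    using IntervalArithmetic, IntervalConstraintProgramming

    S = @constraint x^2 + y^2 <= 1        # separator built from a Julia expression

    X = IntervalBox(-3..3, 2)             # initial 2D search box
    paving = pave(S, X, 0.1)              # bisect down to boxes of width 0.1

    paving.inner                          # boxes proven to lie inside the feasible set
    paving.boundary                       # boxes whose status is still undecided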

Speakers

David Sanders

Associate Professor, Universidad Nacional Autonoma de Mexico (UNAM)
David P. Sanders is associate professor of computational physics in the Department of Physics of the Faculty of Sciences at the National University of Mexico in Mexico City. His Julia tutorials on YouTube have a total of over 100,000 views and he is a principal author of the Juli...


Wednesday June 21, 2017 3:40pm - 4:16pm
West Pauley, Pauley Ballroom, Berkeley, CA

3:40pm

Flux: Machine Learning with Julia
Flux.jl is a new Julia package for machine learning. It aims to provide strong tooling and support for debugging, high-level features for working with very complex networks, and state of the art performance via backends like TensorFlow or MXNet, while also providing a very high level of interoperability so that approaches can easily be mixed and matched. This talk will introduce Flux from the ground up and demonstrate some of its more advanced features.
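Flux's API has evolved considerably since this talk; as a rough, non-authoritative sketch of defining and running a small network in the style of later releases:

    using Flux

    # a small multilayer perceptron; Dense(in, out, activation) is the long-standing form
    model = Chain(Dense(10, 32, relu), Dense(32, 2), softmax)

    x = rand(Float32, 10)   # dummy input
    y = model(x)            # forward pass: a 2-element probability vector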


Speakers

Mike Innes

Julia Computing, Inc.
I work with Julia Computing on the Flux machine learning library.


Wednesday June 21, 2017 3:40pm - 4:16pm
East Pauley, Pauley Ballroom, Berkeley, CA

4:16pm

Knet.jl: Beginning Deep Learning with 100 Lines of Julia
Knet (pronounced "kay-net") is the Koç University deep learning framework implemented in Julia by Deniz Yuret and collaborators. Knet uses dynamic computational graphs generated at runtime for automatic differentiation of (almost) any Julia code. This allows machine learning models to be implemented by only describing the forward calculation (i.e. the computation from parameters and data to loss) using the full power and expressivity of Julia. The implementation can use helper functions, loops, conditionals, recursion, closures, tuples and dictionaries, array indexing, concatenation and other high level language features, some of which are often missing in the restricted modeling languages of static computational graph systems like Theano, Torch, Caffe and Tensorflow. GPU operation is supported by simply using the KnetArray type instead of regular Array for parameters and data. High performance is achieved using custom memory management and efficient GPU kernels.
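A hedged sketch in the style of Knet's classic documentation: only the forward computation is written, and grad derives the gradient. The data here is made up; for GPU use, the arrays would be wrapped in KnetArray as described above.

    using Knet

    predict(w, x) = w[1] * x .+ w[2]                           # forward computation only
    loss(w, x, y) = sum(abs2, predict(w, x) .- y) / length(y)  # mean squared error
    lossgradient = grad(loss)                                  # gradient of loss w.r.t. w

    x, y = randn(13, 100), randn(1, 100)                       # dummy data
    w = Any[0.1 * randn(1, 13), 0.0]
    for epoch in 1:20
        dw = lossgradient(w, x, y)
        for i in eachindex(w)
            w[i] -= 0.1 * dw[i]
        end
    end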


Speakers

Deniz Yuret

Associate Professor, Koc University
Deniz Yuret received his BS, MS, and Ph.D. at MIT working at the AI Lab on machine learning and natural language processing during 1988-1999. He co-founded Inquira, Inc., a startup commercializing question answering technology which was later acquired by Oracle. He is currently an...


Wednesday June 21, 2017 4:16pm - 4:52pm
East Pauley, Pauley Ballroom, Berkeley, CA

4:16pm

LightGraphs: Our Network, Our Story
Our talk discusses the development and origin of LightGraphs, current features, and future developments. We introduce the package's major design choices in a historical context as a compromise between the three core LightGraphs goals of simplicity, performance, and flexibility. We highlight several areas where specific features of Julia have led to flexible and efficient implementations of graph algorithms. We will highlight our work in centrality measures, graph traversals, and spectral graph algorithms as examples of areas where Julia's performance and design decisions have allowed LightGraphs to provide best-in-class implementations of graph algorithms. We also discuss integration with other organizations – JuliaOpt for matching and flow problems, and the Julia data visualization ecosystem – and highlight specifically LightGraphs' potential to provide leadership on performant graph visualization. Finally, we speculate on the influence of Julia's focus on elegant parallel processing on future development of the package.
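A small, hedged taste of the API (function names follow the LightGraphs documentation of that era):

    using LightGraphs

    g = erdos_renyi(1_000, 0.01)      # random graph: 1000 vertices, edge probability 0.01

    nv(g), ne(g)                      # number of vertices and edges
    bc = betweenness_centrality(g)    # one of the centrality measures mentioned above
    tree = bfs_tree(g, 1)             # breadth-first traversal from vertex 1, as a directed tree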


Speakers

Seth Bromberger

Lawrence Livermore National Laboratory

James Fairbanks

Research Engineer, GTRI
James Fairbanks earned a Ph.D in Computational Science and Engineering at Georgia Tech. My research focuses on numerical, statistical, and streaming algorithms for data analysis. The applications include complex networks, online media, medical data, and scientific sensor data.


Wednesday June 21, 2017 4:16pm - 4:52pm
West Pauley, Pauley Ballroom, Berkeley, CA

4:52pm

Modern Machine Learning in Julia with TensorFlow.jl
By many measures, TensorFlow has grown over the last year to become the most popular library for training machine-learning models. TensorFlow.jl provides Julia with a simple yet feature-rich interface to TensorFlow that takes advantage of Julia's multiple dispatch, just-in-time compilation, and metaprogramming capabilities to provide unique capabilities exceeding TensorFlow's own native Python API. This talk will demonstrate TensorFlow.jl by guiding listeners through training a realistic model of image captioning, showing how to 1) construct the model with native Julia control flow and indexing, 2) visualize the model structure and parameters in a web browser during training, and 3) seamlessly save and share the trained model with Python. No prior experience with TensorFlow is assumed.
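A minimal, hedged sketch of the workflow, in the style of the TensorFlow.jl README (the image-captioning model itself is of course far larger):

    using TensorFlow

    sess = Session(Graph())

    x = constant(Float64[1.0, 2.0])
    z = placeholder(Float64)            # value supplied at run time
    w = exp(x + z)                      # graph built with ordinary Julia operators

    run(sess, w, Dict(z => Float64[3.0, 4.0]))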


Speakers

Jonathan Malmaud

MIT
Ph.D. candidate at MIT studying artificial intelligence


Wednesday June 21, 2017 4:52pm - 5:28pm
East Pauley, Pauley Ballroom, Berkeley, CA
 
Thursday, June 22
 

9:35am

The State of the Type System
Julia 0.6 includes a long-needed overhaul of the type system. While the effects of this change are not always visible, the new system eliminates classes of bugs and increases the expressiveness of types and method signatures. I plan to briefly explain how the new system works and what you can do with it. But more importantly, I want to ask: where do we go from here? Will we ever need another overhaul? I'll present some possible future features and other related speculations. Topics may include record types, more powerful tuple types, protocols, ugly corner cases, and method specificity and ambiguity.
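A short illustration of the 0.6 syntax referred to above (function names invented for the example):

    # UnionAll types written with `where`
    const RealVec = Vector{T} where T <: Real
    Vector{Int} <: RealVec                # true
    Vector{String} <: RealVec             # false

    # a method signature whose arguments must share an element type
    first_is_close(v::Vector{T}, x::T) where {T <: Number} = abs(v[1] - x) < 1

    # "triangular" constraint: S ranges over subtypes of the other parameter T
    upcast(x::Vector{T}, y::Vector{S}) where {T, S <: T} = convert(Vector{T}, y)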


Speakers

Jeff Bezanson

Julia Computing
Jeff is one of the creators of Julia, co-founding the project at MIT in 2009 and eventually receiving a Ph.D. related to the language in 2015. He continues to work on the compiler and system internals, while also working to expand Julia's commercial reach as a co-founder of Julia...


Thursday June 22, 2017 9:35am - 10:05am
West Pauley, Pauley Ballroom, Berkeley, CA

10:52am

GLVisualize 1.0
GLVisualize is a visualization framework written purely in Julia + OpenGL. There are a lot of new changes that I want to talk about:
  • New trait system for more modularity and code clarity
  • Different backends for GLVisualize - conquering the Web & PDFs!
  • A new API for simpler drawing
  • Tight integration with GPUArrays, pre-processing on the GPU
  • Higher level plotting interface


Speakers

Simon Danisch

MIT
Developer of GLVisualize & GPUArrays


Thursday June 22, 2017 10:52am - 11:28am
East Pauley, Pauley Ballroom, Berkeley, CA

10:52am

Turing: a Fresh Approach to Probabilistic Programming
Turing is a new probabilistic programming language (PPL) based on Julia, a framework which allows users to define probabilistic models and perform inference automatically. Thanks to Julia's meta-programming support, Turing has a very friendly front-end modelling interface. Meanwhile, coroutines are used in Turing's inference engine development to achieve state-of-the-art sampling performance. Also, we have recently introduced a new Gibbs interface, which allows users to compose different samplers and run them at the same time. In this talk, we will discuss our motivation for developing Turing in Julia, introduce the design and architecture of Turing, and present some practical examples of how probabilistic modelling is performed in Turing.
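Turing's modelling syntax has changed over time; a rough, hedged sketch in the spirit of its documentation, including the Gibbs composition of samplers mentioned above:

    using Turing

    @model function gdemo(x)
        s ~ InverseGamma(2, 3)
        m ~ Normal(0, sqrt(s))
        for i in eachindex(x)
            x[i] ~ Normal(m, sqrt(s))
        end
    end

    # compose different samplers for different parameters via Gibbs
    chain = sample(gdemo([1.5, 2.0]), Gibbs(HMC(0.2, 3, :m), PG(20, :s)), 1000)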


Speakers

Kai Xu

University of Cambridge
Developer of the Turing project from the Cambridge Machine Learning Group


Thursday June 22, 2017 10:52am - 11:28am
West Pauley, Pauley Ballroom, Berkeley, CA

11:28am

Fast Multidimensional Signal Processing with Shearlab.jl
The Shearlet Transform was proposed in 2005 by Professor Gitta Kutyniok (http://www3.math.tu-berlin.de/numerik/mt/mt/www.shearlet.org/papers/SMRuADaSO.pdf) and her colleagues as a multidimensional generalization of the Wavelet Transform, and since then it has been adopted by many companies and institutes for its stable and optimal representation of multidimensional signals. Shearlab.jl is an already registered Julia package (https://github.com/arsenal9971/Shearlab.jl) based on the most widely used implementation of the Shearlet Transform, programmed in Matlab by the research group of Prof. Kutyniok (http://www.shearlab.org/software), improving on its speed by at least a factor of two in different experiments. Examples of applications of the Shearlet Transform include image denoising, image inpainting and video compression; for instance, I used it mainly to reconstruct the light field of a 3D scene from sparse photographic samples of different perspectives for stereo vision purposes. A lot of research institutes and companies have already adopted the Shearlet Transform in their work (e.g. the Fraunhofer Institute in Berlin, Charité Hospital in Berlin, and the Mathematical Institute of TU Berlin) for its directional sensitivity, reconstruction stability and sparse representation.


Speakers

Héctor Andrade Loarca

PhD Candidate, TU Berlin
Ph.D. student in Mathematics at the Technical University of Berlin (TUB) with Professor Gitta Kutyniok as advisor; major in Mathematics and Physics from National University of México (UNAM); ex Data Scientist of a Mexican Open Governance Start Up (OPI); with experience in Data Mining...


Thursday June 22, 2017 11:28am - 12:04pm
West Pauley, Pauley Ballroom, Berkeley, CA

11:28am

QML.jl: Cross-platform GUIs for Julia
The QML.jl (https://github.com/barche/QML.jl) package enables using the QML markup language from the Qt library to build graphical user interfaces for Julia programs. The package follows the recommended Qt practices and promotes separation between the GUI code and application logic. After a short introduction of these principles, the first topic of this talk will be the basic communication between QML and Julia, which happens through Julia functions and data (including composite types) stored in context properties. Using just a few basic building blocks, this makes all of the QML widgets available for interaction with Julia. The next part of the talk deals with Julia-specific extensions, such as the Julia ListModel, interfacing with the display system and GLVisualize and GR.jl support. These features will be illustrated using live demos, based on the examples in the QML.jl repository. Finally, some ideas for extending and improving the package will be listed, hopefully soliciting many contributions. The target audience for this talk is anyone interested in developing GUIs for their Julia application with a consistent look on OS X, Linux and Windows. All user-facing code is pure Julia and QML; no C++ knowledge is required to use the package.


Speakers

Bart Janssens

Major, Royal Military Academy
I am an associate professor at the mechanics department of the Royal Military Academy. For my Ph.D., I worked on Coolfluid, a C++ framework for computational fluid dynamics with a domain specific language. My interest in Julia is sparked by its powerful metaprogramming functionality...


Thursday June 22, 2017 11:28am - 12:04pm
East Pauley, Pauley Ballroom, Berkeley, CA

2:15pm

OhMyREPL.jl: This Is My REPL; There Are Many Like It, But This One Is Mine
By default, Julia comes with a powerful REPL that itself is completely written in Julia. It has, among other things, tab completion, customizable keybindings and different prompt modes to use the shell or access the help system. However, with regards to visual customization there are not that many options for a user to tweak. To that end, I created the package OhMyREPL.jl. Upon loading, it hooks into the REPL and adds features such as syntax highlighting, matching bracket highlighting, functionality to modify input and output prompts and a new way of printing stacktraces and error messages. It also contains some non-visual features, like allowing text that has been copied from a REPL session to be directly pasted back into a REPL and quickly opening the location of stack frames from a stacktrace in an editor. The talk will give an overview of the different features, discuss which features managed to get upstreamed to Julia v0.6 and, if time allows, outline the internals of the package.
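Enabling the package for every session is a one-liner in the startup file (on Julia 0.6 the file was ~/.juliarc.jl); the colorscheme call is one of its customization hooks, with the scheme name recalled from the README rather than guaranteed:

    # ~/.julia/config/startup.jl
    using OhMyREPL
    OhMyREPL.colorscheme!("TomorrowNightBright")   # optional: pick a syntax-highlighting theme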


Speakers

Kristoffer Carlsson

Julia Computing
Ph.D. student in computational mechanics at Chalmers University of Technology. Using Julia both for studies and as a hobby.


Thursday June 22, 2017 2:15pm - 2:51pm
East Pauley, Pauley Ballroom, Berkeley, CA

2:15pm

Stochastic Optimization Models on Power Systems
We will present 3 tools for decision making under uncertainty in the power systems area: SDDP, a tool for optimal hourly operation of complex power systems; OptGen, a computational tool for determining the least-cost expansion of a multi-regional hydrothermal system; and OptFlow, a mathematical model to optimize operation of a generation/transmission system with AC electrical network constraints.

These models have been used by system operators, regulators and investors in more than seventy countries in the Americas, Asia-Pacific, Europe and Africa, including some of the largest hydro based systems in the world, such as the Nordic pool, Canada, the US Pacific Northwest and Brazil. SDDP is also the model used by the World Bank staff in their planning studies of countries in Asia, Africa and Latin America. OptGen had some interesting applications in regional studies such as the interconnection of Central America, the Balkan regions, the interconnection of nine South American countries, Africa (Egypt-Sudan-Ethiopia and Morocco-Spain) and Central Asia. The original version of all 3 models was written in FORTRAN with the aid of some modelling tool or higher level API: AMPL for OptFlow, Mosel for OptGen and the COIN-OR API for SDDP. As with any software, maintaining the code and adding new features became increasingly complex because they have to be built upon older data structures and program architectures.

These concerns motivated PSR to develop an updated version of these programs written entirely in Julia (with JuMP and MathProgBase) for three basic reasons: (i) the code is concise and very readable; (ii) the availability of an advanced optimization "ecosystem"; and (iii) excellent resources for distributed processing (CPUs and GPUs). We retained the use of Xpress by developing the Xpress.jl library. We also use MPI.jl for distributed processing (including multiple servers in AWS).

The computational performance of the new code matches that of the current versions, which is very encouraging given that the current FORTRAN code has been optimized over several years based on thousands of studies. Also, the Julia code incorporates several new modeling features that were easy to implement in all 3 models, including SDP and SOCP relaxations for OPF and the SDDiP method for stochastic integer optimization, confirming our expectation of faster model development.

The new models were incorporated into an integrated planning system for Peru being developed by PSR, which will be delivered in August 2017. They are also being internally tested as a "shadow" to the current version for studies in several countries and have been delivered for beta testing to some PSR clients. The official release is scheduled for the end of 2017.
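The models above are formulated with JuMP; as a generic, hedged toy example (not PSR's formulation, using an open-source solver in place of the Xpress.jl interface mentioned above, and written against the current JuMP API rather than the 2017 one):

    using JuMP, GLPK

    cost   = [12.0, 20.0, 35.0]   # made-up generation costs
    cap    = [80.0, 50.0, 30.0]   # generator capacities
    demand = 120.0

    m = Model(GLPK.Optimizer)
    @variable(m, 0 <= g[i = 1:3] <= cap[i])
    @constraint(m, sum(g) == demand)
    @objective(m, Min, sum(cost[i] * g[i] for i in 1:3))

    optimize!(m)
    value.(g), objective_value(m)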


Speakers

Joaquim Garcia

PSR Inc.
Joaquim has a BSc degree in electrical engineering and a BSc degree in mathematics, both from PUC-Rio, and is currently working towards a PhD in electrical engineering with emphasis on decision support, also at PUC-Rio. During his undergraduate studies, he attended a year at UC Santa...

Camila Metello

PSR
Camila graduated as an industrial engineer and has a MSc in Decision Analysis from PUC-Rio. She attended UC Berkeley for a semester during her undergraduate studies. She joined PSR in 2013, where, at present, she works on the development of models for the optimization of hydrothermal dispatch under uncertainty...


Thursday June 22, 2017 2:15pm - 3:03pm
West Pauley, Pauley Ballroom, Berkeley, CA

3:45pm

Julia for Fully Homomorphic Encryption: Current Progress and Challenges
Fully homomorphic encryption (FHE) is a cryptographic technique allowing a user to run arbitrary computations over encrypted data. This is particularly useful for computing statistical analytics over sensitive data. In this work, we introduce a Julia module, Fhe.jl, which supports running Julia functions over an FHE-encrypted data set. We do so by using symbolic execution to convert a Julia function into its circuit representation, which we then evaluate over the encrypted data. In this talk, we will discuss the progress we have made so far, some of the challenges we have run into, and how we hope to work with the Julia community to continue our efforts.


Speakers

Jose Calderon

Galois, Inc.
José Manuel Calderón Trilla is a Research Scientist at Galois, Inc. working on Compilers, Static Analysis, and Formal Methods. He received his Ph.D. from the University of York in the UK for his work on Implicit Parallelism in lazy functional languages.

Alex J. Malozemoff

Galois, Inc.
Alex J. Malozemoff is a research scientist at Galois, Inc., with a focus on cryptography and computer security. He received his Ph.D. from the University of Maryland in 2016.


Thursday June 22, 2017 3:45pm - 4:21pm
East Pauley, Pauley Ballroom, Berkeley, CA

3:45pm

Modia: A Domain Specific Extension of Julia for Modeling and Simulation
Modia is a Julia package to model and simulate physical systems (electrical, mechanical, thermo-dynamical, etc.) described by differential and algebraic equations. A user defines a model on a high level with model components (such as a mechanical body, an electrical resistance, or a pipe) that are physically connected together. A model component is constructed by "expression = expression" equations. The defined model is symbolically processed, JIT compiled and simulated with the Sundials IDA solver and the KLU sparse matrix package. With this approach it is possible and convenient to build models with hundreds of thousands of equations describing the dynamics of a car, an airplane, a power plant, etc., and simulate them. The authors used previous experience from the design of the modeling language Modelica (www.Modelica.org) to develop Modia.

In the presentation it is shown how a user can build models and simulate physical systems, including mechanical systems and electrical circuits. Furthermore, the design of Modia is sketched: The Modia language is a domain specific extension of Julia using macros. With graph theoretical algorithms, some of them recently developed by the authors, equations are pre-processed (including analytic differentiation if necessary) and transformed into a special form that can be simulated by IDA. Hereby the sparsity structure of the original (Modia) equations, as well as the nature of array equations are kept intact.

Speakers

Hilding Elmqvist

CEO, Mogram AB
Hilding Elmqvist attained his Ph.D. at the Department of Automatic Control, Lund Institute of Technology in 1978. His Ph.D. thesis contains the design of a novel object-oriented model language called Dymola and algorithms for symbolic model manipulation. It introduced a new modeling...


Thursday June 22, 2017 3:45pm - 4:21pm
West Pauley, Pauley Ballroom, Berkeley, CA

4:21pm

Julia for Infrastructure: Experiences in Developing a Distributed Storage Service
Julia is a language designed for numerical computing and it does that job pretty well. However, the emphasis on numerical computing and data science tends to overshadow the language's other use cases. In this talk we share our experiences using Julia to build a distributed data fabric using commodity hardware. A data fabric is a distributed storage system that abstracts away the physical infrastructure and makes data available to applications using well known protocols such as NFS or S3. Our talk focuses on how we use Julia to implement a data fabric, with specific examples. We will discuss some of the shortcomings and how we circumvented them. Finally, we close with a cost-benefit analysis of developing in Julia and how it can be a critical advantage in bringing products to market.


Speakers

Ajay Mendez

Founder, Kinant
Ajay works on systems and infrastructure software for fun and profit. He has dabbled in operating systems, memory allocators, file systems and distributed systems. He founded kinant.com in 2017 to simplify the deployment and usage of storage infrastructure.


Thursday June 22, 2017 4:21pm - 4:57pm
East Pauley, Pauley Ballroom, Berkeley, CA

4:21pm

Mixed-Mode Automatic Differentiation in Julia
Julia's unique execution model, metaprogramming facilities, and type system make it an ideal candidate language for native automatic differentiation (AD). In this talk, we'll discuss a variety of Julia-specific tricks employed by ForwardDiff and ReverseDiff to differentiate user-provided Julia functions. Topics covered include the implementation of a native Julia execution tracer via operator overloading, functor-based directives for specialized instruction taping, SIMD vectorization and instruction elision for inlined dual number operations, and vectorized differentiation of linear algebraic expressions. I'll close the talk with a glimpse into the future of AD in Julia and JuMP, highlighting the effect new features may have on other downstream projects like Celeste, Optim and RigidBodyDynamics.
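The user-facing entry points are simple; for example (the function f here is arbitrary):

    using ForwardDiff, ReverseDiff

    f(x) = sum(sin, x) + prod(x[1:3])

    x = rand(5)
    g = ForwardDiff.gradient(f, x)    # forward mode, via dual numbers
    H = ForwardDiff.hessian(f, x)     # nested duals give second derivatives
    gr = ReverseDiff.gradient(f, x)   # reverse mode, via an instruction tape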


Speakers

Jarrett Revels

MIT
I like to make Julia code differentiate itself.


Thursday June 22, 2017 4:21pm - 4:57pm
West Pauley, Pauley Ballroom, Berkeley, CA
 
Friday, June 23
 

9:35am

Taking Vector Transposes Seriously
From @jiahao: We have really thought carefully about what the transpose of a vector should mean in a programming language. The pre-0.6 behavior that vector'vector yields a vector, vector' yields a matrix, and vector'' yields a matrix are all bad mathematics and produced no shortage of confusion for end users.

We present a summary of our research at the MIT Julia Labs into issue #4774, as a language design question that is informed by a comprehensive understanding of user expectations. Our main result is a short proof that it is impossible to avoid either new types, "ugly mathematics" (violation of Householder notation) or type instability. A single Array type is incompatible with Householder notation that produces the expected types from typical linear algebraic expressions. Furthermore, Householder notation intrinsically requires a conflation of 1x1 matrices and true scalars.

We also provide historical evidence that the notion of "ugly mathematics" is neither static nor objective. In reality, linear algebra has changed greatly over the past centuries, demonstrating the impermanence of even elementary concepts of what matrices and vectors are and how they have been influenced by notation - a discussion forced into consciousness through the lens of programming language design, types, and formal program semantics.

We review the resolution of #19670 in the context of designs in other programming languages, showing that all these designs turn out to be locally optimal in conflating as much of Householder notation and array semantics as possible.

This is joint work with Alan Edelman, Andy Ferris, and a few other people.
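A concrete illustration of the 0.6 behaviour under discussion (the wrapper type is RowVector in 0.6; later versions use Adjoint/Transpose):

    v = [1, 2, 3]

    v'        # 1×3 RowVector: a lazy wrapper, not a Matrix
    v''       # a plain Vector again, so transpose is an involution
    v' * v    # a true scalar (14), not a 1-element array as before 0.6
    v * v'    # 3×3 outer-product Matrix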


Speakers

Jiahao Chen

Data Science Manager, Capital One
Data Scientist at Capital One, formerly Research Scientist at MIT


Friday June 23, 2017 9:35am - 10:05am
West Pauley, Pauley Ballroom, Berkeley, CA

10:40am

Full Stack Web Development with Genie.jl
Julia has great potential in the web space thanks to its concise and friendly syntax, the powerful REPL, Unicode support, cross-platform availability, the efficiently compiled code and its parallel and distributed computing models. Low-level libraries like HttpServer and WebSockets are available, but they leave the developers having to spend a lot of time writing glue and boilerplate code: a tedious, inefficient and error-prone task.

Genie is a new web framework that leverages Julia's unique combination of features and its extensive collection of packages to empower developers to create powerful web apps in less time and with less code. It glues low-level libraries and contributes its own middlewares to expose a coherent and efficient workflow and a rich API for building web applications.

This talk will give you the guided tour, introducing the MVC stack and its main components, showing you how to quickly bootstrap a new Genie app and how to easily implement CRUD operations to expose resources over the internet, in an efficient and secure manner. You will see how easy it is to use Genie's API in tandem with Julia's modules system to hook up your code - allowing you to focus on your software's value proposition instead of wasting precious time dealing with the low-level details of transporting bytes over the wire.
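A minimal, hedged route definition in the spirit of Genie's documentation (the route and port here are invented, and exact signatures may differ between releases):

    using Genie

    route("/hello") do
        "Hello from Genie"
    end

    up(8000, async = false)   # start the web server on port 8000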


Speakers

Adrian Salceanu

Adrian Corvin Salceanu
Experienced web developer, architecting and building performance-critical web apps that handle large amounts of real-time data. Using Julia to tackle web development's own two-language problem (productive-slow-interpreted vs unproductive-fast-compiled). CTO at OLBG. Organizer of Barcelona...


Friday June 23, 2017 10:40am - 11:16am
West Pauley, Pauley Ballroom, Berkeley, CA

10:40am

HiFrames: High Performance Distributed Data Frames in Julia
Data frames are essential tools for data scientists, but existing data frame packages in Julia (and other languages) are sequential and do not scale to large data sets. Alternatively, data frames in distributed frameworks such as Spark are slow and not flexibly integrated with other computations. We propose a novel compiler-based approach where we integrate data frames into the High Performance Analytics Toolkit (HPAT) to build HiFrames. It automatically parallelizes and compiles relational operations along with other array computations in end-to-end data analytics programs, and generates efficient MPI/C++ code. We demonstrate that HiFrames is significantly faster than alternatives such as Spark on clusters, without forcing the programmer to switch to embedded SQL for part of the program. HiFrames is 3.6x to 70x faster than Spark SQL for basic relational operations, and can be up to 20,000x faster for advanced analytics operations, such as weighted moving averages (WMA), that the map-reduce paradigm cannot handle effectively. We will discuss how Julia's powerful macro and compilation system facilitates developing HiFrames.


Speakers

Ehsan Totoni

Intel Labs
Ehsan Totoni is a Research Scientist at Intel Labs. He develops programming systems for large-scale HPC and big data analytics applications with a focus on productivity and performance. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in...


Friday June 23, 2017 10:40am - 11:16am
East Pauley, Pauley Ballroom, Berkeley, CA

11:16am

Image Quilting: Building 3D Geological Models One Tile at a Time
ImageQuilting.jl is a high-performance implementation of texture synthesis and transfer for 3D images that is capable of matching pre-existing data in the canvas where the image is to be synthesized. It can optionally make use of GPUs through the OpenCL standard and is currently being used in industry for fast generation of 3D geological models. In this talk, I will demonstrate some of the applications of this package in energy resources engineering and hydrogeology, and will highlight the qualities of the Julia programming language that enabled an unprecedented speed in this famous computer vision algorithm.


Speakers

Júlio Hoffimann

Ph.D. candidate, Stanford University
I am a Ph.D. candidate in the Department of Energy Resources Engineering at Stanford University. In my research, I study the links between surface processes (i.e. flow and sediment transport) at the surface of the Earth and the resulting geostatistical properties at its subsurface...


Friday June 23, 2017 11:16am - 11:52am
East Pauley, Pauley Ballroom, Berkeley, CA

1:42pm

COBRA.jl: Accelerating Systems Biology

Biologists in the COnstraint-Based Reconstruction and Analysis (COBRA) [7] community are gearing up to develop computational models of large and huge-scale biochemical networks with more than one million biochemical reactions. The growing model size puts a strain on efficient simulation and network exploration times to the point that accelerating existing COBRA methods became a priority. Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks has long been hampered by performance limitations of current implementations in Matlab/C (The COBRA Toolbox [8] and fastFVA [3]) or Python (cobrapy [2]). Julia [1] is the language that fills the gap between complexity, performance, and development time. DistributedFBA.jl [4], part of the novel COBRA.jl package, is a high-level, high-performance, open-source Julia implementation of flux balance analysis, which is a linear optimization problem. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes using optimization solver interfaces implemented in MathProgBase.jl [5]. Julia’s parallelization capabilities led to a speedup in latency that follows Amdahl’s law. For the first time, a flux variability analysis (two flux balance analyses on each biochemical reaction) on a model with more than 200k biochemical reactions [6] has been performed. With Julia and COBRA.jl, the reconstruction and analysis capabilities of large and huge-scale models in the COBRA community are lifted to another level. Code and benchmark data are freely available on github.com/opencobra/COBRA.jl References:

  • [1] Bezanson, Jeff and Edelman, Alan and Karpinski, Stefan and Shah, Viral B., “Julia: A Fresh Approach to Numerical Computing”, arXiv:1411.1607 [cs] (2014). arXiv: 1411.1607
  • [2] Ebrahim, Ali and Lerman, Joshua A. and Palsson, Bernhard O. and Hyduke, Daniel R., “COBRApy: COnstraints-Based Reconstruction and Analysis for Python”, BMC Systems Biology 7 (2013), pp. 74.
  • [3] Gudmundsson, Steinn and Thiele, Ines, “Computationally efficient flux variability analysis”, BMC Bioinformatics 11, 1 (2010), pp. 489.
  • [4] Heirendt, Laurent and Thiele, Ines and Fleming, Ronan M. T., “DistributedFBA.jl: high-level, high-performance flux balance analysis in Julia”, Bioinformatics btw838 (2017).
  • [5] Lubin, Miles and Dunning, Iain, “Computing in Operations Research using Julia”, INFORMS Journal on Computing 27, 2 (2015), pp. 238–248. arXiv: 1312.1431
  • [6] Magnúsdóttir, Stefanía and Heinken, Almut and Kutt, Laura and Ravcheev, Dmitry A. and Bauer, Eugen and Noronha, Alb…, “Generation of genome-scale metabolic reconstructions for 773 members of the human gut microbiota”, Nat Biotech 35, 1 (2017), pp. 81–89.
  • [7] Palsson, Bernhard Ø, Systems Biology: Constraint-based Reconstruction and Analysis (Cambridge, England: Cambridge University Press, 2015).
  • [8] Schellenberger, Jan and Que, Richard and Fleming, Ronan M. T. and Thiele, Ines and Orth, Jeffrey D. and Feist, Adam M. and Ziel…, “Quantitative prediction of cellular metabolism with constraint-based models: the COBRA Toolbox v2.0”, Nat. Protocols 6, 9 (2011), pp. 1290–1307.

Speakers

Laurent Heirendt

Research Associate, University of Luxembourg / LCSB
Laurent Heirendt was born in 1987 in Luxembourg City, Luxembourg (Europe). He received his BSc in Mechanical Engineering from the Ecole Polytechnique Fédérale de Lausanne, Switzerland in 2009. A year later, he received his MSc in Advanced Mechanical Engineering from Imperial College...


Friday June 23, 2017 1:42pm - 2:18pm
East Pauley, Pauley Ballroom, Berkeley, CA

1:42pm

Julia: The Type of Language for Mathematical Programming
Julia was designed to be the right language for programming mathematics. In this talk, I'll argue that its sophisticated type system allows mathematicians to program in the same way they write mathematics. This simplicity has two consequences. First, it has made Julia an attractive ecosystem in which to write mathematical packages: Julia is now the language with the most comprehensive, robust, and user-friendly ecosystem of packages for mathematical programming (or optimization, in modern lingo). Second, it has made Julia the right language in which to express many mathematical problems. The lightweight type system makes it easy to write code that is clearer than pseudocode.

This talk will present three case studies in optimization. We hope the audience will leave the talk with a new appreciation of Julia's type system, as well as a new toolkit of packages to use for data fitting and optimization.

1. Convex is a widely used library for convex optimization in Julia. In that package, the type system is used to create and recursively analyze the abstract syntax tree representing an optimization problem. Notions such as the sign of a real number, or the convexity or concavity of a function, are represented as types; and the convexity of an expression can be analyzed using a simple recursion over the tree of types.

2. LowRankModels is a statistical package for imputing missing entries in large, heterogeneous tabular data sets. LowRankModels uses type information about a DataFrame to automatically select the appropriate optimization problem to solve in order to find the best completion for the data table. These optimization problems are parametrized by a set of loss functions and regularizers. Using the type system, we are able to write algorithms that work seamlessly for any loss function or regularizer a user may dream up.

3. Sketched approximations are a class of fast algorithms for producing a low rank approximation to a matrix - like an eigenvalue decomposition, but faster. We'll show how to use parametric types to write all the special cases of the algorithm without introducing redundant code. Notably, these parametric types make it easier to understand the flow of the algorithm, and have essentially no analogue in "pseudocode" notation. Together with Julia's simple mathematical syntax and support for unicode (e.g., Greek) letters, we'll see that the Julia code functions not only as an implementation of the method, but as a better version of pseudocode.
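For the first case study, a hedged sketch of what Convex usage looks like (SCS is just one example solver; older releases passed a solver instance to solve! instead):

    using Convex, SCS

    A, b = randn(20, 10), randn(20)

    x = Variable(10)
    problem = minimize(sumsquares(A * x - b), [x >= 0, sum(x) == 1])

    solve!(problem, SCS.Optimizer)
    problem.optval, evaluate(x)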


Speakers

Madeleine Udell

Assistant Professor, Cornell University
Madeleine Udell is Assistant Professor of Operations Research and Information Engineering and Richard and Sybil Smith Sesquicentennial Fellow at Cornell University. She studies optimization and machine learning for large scale data analysis and control, with applications in marketing...


Friday June 23, 2017 1:42pm - 2:18pm
West Pauley, Pauley Ballroom, Berkeley, CA

2:18pm

TaylorIntegration.jl: Taylor's Integration Method in Julia
In this talk we shall present TaylorIntegration.jl, an ODE integration package using Taylor's method in Julia. The main idea of Taylor's method is to approximate the solution locally by means of a high-order Taylor expansion, whose coefficients are computed recursively using automatic differentiation techniques. One of the principal advantages of Taylor's method is that, whenever high accuracy is required, the order of the method can be increased, which is more efficient computationally than taking smaller time steps. The accuracy of Taylor's method permits reaching round-off-level errors per integration step. Traditionally, it has been difficult to make a generic Taylor integration package, but Julia permits this beautifully. We shall present some examples of the application of this method to ODE integration, including the computation of the full Lyapunov spectrum, the use of jet transport techniques, and parameter sensitivity. Open issues related to improving performance will be described.
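A hedged sketch of the entry point (the equations-of-motion argument convention shown here follows recent releases and may differ from the version presented):

    using TaylorIntegration

    # harmonic oscillator
    function harmonic!(du, u, p, t)
        du[1] = u[2]
        du[2] = -u[1]
        return nothing
    end

    q0 = [1.0, 0.0]
    t, q = taylorinteg(harmonic!, q0, 0.0, 100.0, 25, 1e-20)   # order 25, abs. tolerance 1e-20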


Speakers
avatar for Luis Benet

Luis Benet

Dr, UNAM
I am a physicist interested in precise integrations of Solar System minor bodies. I am author of TaylorSeries.jl, TaylorIntegration.jl, TaylorModels.jl, and ValidatedNumerics.jl.

Jorge Antonio Pérez Hernández

PhD candidate, UNAM
Jorge Perez is a Physics Ph.D. student at UNAM, Mexico, under supervision of Luis Benet and David P. Sanders, authors of TaylorSeries.jl and ValidatedNumerics.jl. His Ph.D. research project is related to understanding the dynamics of minor Solar System objects: comets, asteroids...


Friday June 23, 2017 2:18pm - 2:54pm
West Pauley, Pauley Ballroom, Berkeley, CA

3:52pm

Event-based Simulation of Spiking Neural Networks in Julia
Information in the brain is processed by the coordinated activity of large neural circuits. Neural network models help to understand, for example, how biophysical features of single neurons and the network topology shape the collective circuit dynamics. This requires solving large systems of coupled differential equations, which is numerically challenging. Here, we introduce a novel, efficient method for numerically exact simulations of sparse neural networks that brings to bear Julia's different data structures and high performance. The new algorithm reduces the computational cost from O(N) to O(log(N)) operations per network spike. This is achieved by mapping the neural dynamics to pulse-coupled phase oscillators and using mutable binary heaps for efficient state updates. Thereby numerically exact simulations of large spiking networks and the characterization of their chaotic phase space structure become possible. For example, calculating the largest Lyapunov exponent of a spiking neural network with one million neurons is sped up by more than four orders of magnitude compared to previous implementations in other programming languages (C++, Python, Matlab).
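The event queue can be sketched with DataStructures.jl's mutable binary heap, which supports updates in O(log N) through handles; the neuron-update logic below is purely schematic.

    using DataStructures

    # next-spike-time queue: one handle per neuron so entries can be updated in place
    h = MutableBinaryMinHeap{Float64}()
    handles = [push!(h, rand()) for _ in 1:1_000_000]

    t, idx = top_with_handle(h)    # neuron with the earliest next spike
    # ... propagate that spike, then reschedule the neuron:
    update!(h, idx, t + 0.7)       # 0.7 is a placeholder for the recomputed spike time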


Speakers

Rainer Engelken

MPI for Dynamics and Self-Organization
Rainer just finished his Ph.D. at the Max Planck Institute for Dynamics and Self-Organization (Göttingen) on 'Chaotic neural circuit dynamics' after studying physics at various places. He has been using Julia since 2014, as it minimizes both programming time and CPU time and allows...


Friday June 23, 2017 3:52pm - 4:28pm
West Pauley, Pauley Ballroom, Berkeley, CA

3:52pm

GraphGLRM: Making Sense of Big Messy Data
Many projects in research and development require analysis of tabular data. For example, medical records can be viewed as a collection of variables like height, weight, and age for different patients. The values may be boolean (yes or no), numerical (100.3), categorical (A, B, O), or ordinal (early, middle, late). Some values may also be missing. However, analysis and feature extraction is made easier by knowing relationships between variables, for example, that weight increases with height. GraphGLRM is a framework that leverages structure in data to de-noise, compress, and estimate missing values. Using Julia’s flexibility and speed, we developed this package quickly and with sufficient performance for real-world data processing needs. GraphGLRMs are now robust and versatile enough to work with sparse, heterogeneous data. We will also discuss updates to Julia data structures and tooling that would ease package development and further empower the GraphGLRM framework.    More about GraphGLRMs: https://github.com/mihirparadkar/GraphGLRM.jl    More about LowRankModels: https://github.com/madeleineudell/LowRankModels.jl
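GraphGLRM builds on LowRankModels; a hedged sketch of the underlying GLRM interface (GraphGLRM's own graph-regularized constructors are not shown here):

    using LowRankModels

    A = randn(200, 50)                      # a numeric table; heterogeneous losses are also supported
    losses = QuadLoss()
    rx, ry = QuadReg(0.1), QuadReg(0.1)     # regularizers on the row and column factors

    glrm = GLRM(A, losses, rx, ry, 5)       # rank-5 model
    X, Y, ch = fit!(glrm)                   # factors and convergence history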

Speakers

Mihir Paradkar

Cornell University
Mihir Paradkar recently graduated from Cornell University in Biological Engineering. He has been a user of Julia since v0.3.5 and is a developer of GraphGLRM.jl and LowRankModels.jl.


Friday June 23, 2017 3:52pm - 4:28pm
East Pauley, Pauley Ballroom, Berkeley, CA

4:28pm

Building End to End Data Science Solutions in the Azure Cloud with Julia
Increasingly, organizations are using cloud platforms to store their data and perform analytics, driven by cost, scale, and manageability considerations. Business applications are being retooled to leverage the vast enterprise / public data, artificial intelligence (AI), and machine learning (ML) algorithms. To build and deploy large scale intelligent applications, data scientists and analysts today need to be able to combine their knowledge of analytical languages and platforms like Julia with that of the cloud.

In this talk, data scientists and analysts will learn how to build end-to-end analytical solutions using Julia on scalable cloud infrastructure. Developing such solutions usually requires one to understand how to seamlessly integrate Julia with various cloud technologies. After attending the talk, the attendees should have a good understanding of all the major aspects needed to start building intelligent applications on the cloud using Julia, leveraging appropriate cloud services and tool-kits. We will also briefly introduce the Azure Data Science Virtual Machine (DSVM), which provides a comprehensive development/experimentation environment with several pre-configured tools to make it easy to work with different cloud services (SQL Data Warehouse, Spark, Blobs etc.) from Julia and other popular data analytics languages. Join this demo-heavy session where we cover the end-to-end data science life-cycle and show how you can access storage and compute services on the Azure cloud using Julia from the DSVM. A self-guided tutorial building upon the examples in the demo will be published online for attendees to continue their learning offline.


Speakers

Udayan Kumar

Microsoft
Udayan is a Software Engineer with the Algorithms and Data Science group at Microsoft. Before coming to Microsoft, he was designing predictive algorithms to detect threats and malignant apps at a mobile security startup in Chicago. He has a MS and a Ph.D. in Computer Engineering from...


Friday June 23, 2017 4:28pm - 5:04pm
East Pauley, Pauley Ballroom, Berkeley, CA

4:28pm

The Present and Future of Robotics in Julia
We (Twan and Robin) are graduate students in the Robot Locomotion Group at MIT. Our research focuses on modeling and optimization for the simulation and control of walking (and sometimes flying) robots. We've been using Julia in our research over the past year, and we're excited to share what we've learned, what we've built, and what we're hoping to see in the future of Julia.

In particular, we'd like to share some of our work on:
  • Robot dynamics and simulation in Julia: https://github.com/tkoolen/RigidBodyDynamics.jl
  • 3D visualization and manipulation of robot models from Julia: https://github.com/rdeits/RigidBodyTreeInspector.jl and https://github.com/rdeits/DrakeVisualizer.jl
  • Optimization in Julia: https://github.com/rdeits/NNLS.jl
  • Collision algorithms in Julia: https://github.com/rdeits/EnhancedGJK.jl and https://github.com/rdeits/AdaptiveDistanceFields.jl

We would also like to talk about how some of the best parts of the Julia ecosystem have made our work possible, like JuMP.jl, ForwardDiff.jl, and StaticArrays.jl. And, finally, we plan to discuss what we hope to see in Julia's future, including what the role of Julia can be inside a real-time robot controller.
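A hedged taste of RigidBodyDynamics.jl (the URDF path is hypothetical, and older releases spelled the loader parse_urdf(Float64, path)):

    using RigidBodyDynamics, Random

    mechanism = parse_urdf("robot.urdf")
    state = MechanismState(mechanism)
    rand!(state)                  # random joint configuration and velocities

    M = mass_matrix(state)        # joint-space mass matrix
    c = dynamics_bias(state)      # Coriolis, centrifugal and gravity terms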


Speakers

Robin Deits

PhD student, MIT
I'm a graduate student in the Robot Locomotion Group at MIT, working on simulation, planning, and control of walking and flying robots. I'm particularly interested in footstep planning, push-recovery for bipeds, and applications of learning to robotics.

Twan Koolen

MIT
We're graduate students in the Robot Locomotion Group at MIT, where we work on simulation, planning, and control of walking and flying robots.


Friday June 23, 2017 4:28pm - 5:04pm
West Pauley, Pauley Ballroom, Berkeley, CA