NLPModels.jl: Data Structures for Optimization Models

NLPModels

This package provides general guidelines for representing nonlinear programming (NLP) problems in Julia and a standardized API to evaluate the functions and their derivatives. The main objective is to allow optimization solvers written in Julia to rely on that API.

How to Cite

If you use NLPModels.jl in your work, please cite using the format given in CITATION.bib.

Optimization Problems

Optimization problems are represented by an instance of (a subtype of) AbstractNLPModel. Such instances are composed of

  • an instance of NLPModelMeta, which provides information about the problem, including the number of variables, constraints, bounds on the variables, etc.
  • other data specific to the provenance of the problem.

See the documentation for details on the models and the API.
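Concretely, solver code can rely on those two pieces no matter which package produced the model. A small illustration, assuming model is any AbstractNLPModel (concrete models conventionally also carry a Counters field that backs the evaluation counters):

  model.meta         # NLPModelMeta: dimensions, bounds, sparsity counts, ...
  model.meta.nvar    # number of variables
  model.meta.x0      # initial guess
  neval_obj(model)   # number of objective evaluations so far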

Installation

pkg> add NLPModels

Models

This package provides no models of its own, but it supports models written by hand.

Check the list of packages that define models on this page of the docs. A minimal hand-written model is sketched below.
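The sketch follows the pattern from the NLPModels documentation and assumes a recent NLPModels version in which AbstractNLPModel is parametrized by the element and storage types; the Rosenbrock function is used purely as an illustration:

  using NLPModels

  # min (x₁ - 1)² + 100 (x₂ - x₁²)²
  mutable struct Rosenbrock{T, S} <: AbstractNLPModel{T, S}
    meta::NLPModelMeta{T, S}
    counters::Counters
  end

  Rosenbrock() = Rosenbrock(NLPModelMeta(2; x0 = [-1.2; 1.0], name = "Rosenbrock"), Counters())

  function NLPModels.obj(nlp::Rosenbrock, x::AbstractVector)
    increment!(nlp, :neval_obj)  # keep the evaluation counters up to date
    return (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2
  end

  function NLPModels.grad!(nlp::Rosenbrock, x::AbstractVector, g::AbstractVector)
    increment!(nlp, :neval_grad)
    g[1] = 2 * (x[1] - 1) - 400 * x[1] * (x[2] - x[1]^2)
    g[2] = 200 * (x[2] - x[1]^2)
    return g
  end

A solver can then call the API below on a Rosenbrock() instance exactly as on a model produced by any other package.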

Main Methods

If model is an instance of an appropriate subtype of AbstractNLPModel, the following methods are normally defined:

  • obj(model, x): evaluate f(x), the objective at x
  • cons(model, x): evaluate c(x), the vector of general constraints at x
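For example, with model any such instance and x a point of dimension model.meta.nvar (cons assumes the model has general constraints):

  x  = model.meta.x0
  fx = obj(model, x)    # f(x), a scalar
  cx = cons(model, x)   # c(x), a vector of length model.meta.ncon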

The following methods are defined if first-order derivatives are available:

  • grad(model, x): evaluate ∇f(x), the objective gradient at x
  • jac(model, x): evaluate J(x), the Jacobian of c at x as a sparse matrix
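Continuing the example:

  gx = grad(model, x)   # ∇f(x), a vector of length model.meta.nvar
  Jx = jac(model, x)    # J(x), a model.meta.ncon × model.meta.nvar sparse matrix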

If Jacobian-vector products can be computed more efficiently than by evaluating the Jacobian explicitly, the following methods may be implemented:

  • jprod(model, x, v): evaluate the result of the matrix-vector product J(x)⋅v
  • jtprod(model, x, u): evaluate the result of the matrix-vector product J(x)ᵀ⋅u
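Both return the same values as multiplying by the Jacobian, without forming it, which is the basis of a quick sanity check:

  v   = ones(model.meta.nvar)
  u   = ones(model.meta.ncon)
  Jv  = jprod(model, x, v)    # ≈ jac(model, x) * v
  Jtu = jtprod(model, x, u)   # ≈ jac(model, x)' * u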

The following method is defined if second-order derivatives are available:

  • hess(model, x, y): evaluate ∇²L(x,y), the Hessian of the Lagrangian at x and y
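Here L(x,y) = f(x) + yᵀc(x); recent versions accept an obj_weight keyword to scale the objective term. For example:

  y   = model.meta.y0
  Hxy = hess(model, x, y)   # ∇²L(x,y); depending on the version, only one triangle may be stored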

If Hessian-vector products can be computed more efficiently than by evaluating the Hessian explicitly, the following method may be implemented:

  • hprod(model, x, y, v): evaluate the result of the matrix-vector product ∇²L(x,y)⋅v
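As with the Jacobian products, the result should match multiplication by the explicit Hessian:

  Hv = hprod(model, x, y, v)   # ≈ hess(model, x, y) * v, without forming ∇²L(x,y)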

Several in-place variants of the methods above may also be implemented.
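For example, grad! and cons! write into preallocated output vectors, which avoids allocating on every call inside a solver loop:

  g = similar(x)
  grad!(model, x, g)               # fills g with ∇f(x)
  c = similar(x, model.meta.ncon)
  cons!(model, x, c)               # fills c with c(x)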

The complete list of methods that an interface may implement can be found in the documentation.

Attributes

NLPModelMeta objects have the following attributes (with S <: AbstractVector):

Attribute   Type          Notes
---------   -----------   -----------------------------------------------------
nvar        Int           number of variables
x0          S             initial guess
lvar        S             vector of lower bounds
uvar        S             vector of upper bounds
ifix        Vector{Int}   indices of fixed variables
ilow        Vector{Int}   indices of variables with lower bound only
iupp        Vector{Int}   indices of variables with upper bound only
irng        Vector{Int}   indices of variables with lower and upper bound (range)
ifree       Vector{Int}   indices of free variables
iinf        Vector{Int}   indices of visibly infeasible bounds
ncon        Int           total number of general constraints
nlin        Int           number of linear constraints
nnln        Int           number of nonlinear general constraints
y0          S             initial Lagrange multipliers
lcon        S             vector of constraint lower bounds
ucon        S             vector of constraint upper bounds
lin         Vector{Int}   indices of linear constraints
nln         Vector{Int}   indices of nonlinear constraints
jfix        Vector{Int}   indices of equality constraints
jlow        Vector{Int}   indices of constraints of the form c(x) ≥ cl
jupp        Vector{Int}   indices of constraints of the form c(x) ≤ cu
jrng        Vector{Int}   indices of constraints of the form cl ≤ c(x) ≤ cu
jfree       Vector{Int}   indices of "free" constraints (there shouldn't be any)
jinf        Vector{Int}   indices of the visibly infeasible constraints
nnzo        Int           number of nonzeros in the gradient
nnzj        Int           number of nonzeros in the sparse Jacobian
nnzh        Int           number of nonzeros in the sparse Hessian
minimize    Bool          true if optimize == minimize
islp        Bool          true if the problem is a linear program
name        String        problem name
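These attributes let a solver classify a problem before picking a strategy. A short sketch, again with model an AbstractNLPModel; the predicates at the end, such as has_bounds and unconstrained, are provided in recent NLPModels versions:

  model.meta.minimize        # false means the problem is a maximization
  length(model.meta.ifree)   # how many variables carry no bounds
  isempty(model.meta.jfix)   # true if there are no equality constraints
  has_bounds(model)          # any finite bound on the variables?
  unconstrained(model)       # no general constraints and no bounds?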

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.

If you have a question that is not suited for a bug report, feel free to start a discussion here. The forum covers this repository and the JuliaSmoothOptimizers organization in general, so questions about any of our packages are welcome.

Download Details:

Author: JuliaSmoothOptimizers
Source Code: https://github.com/JuliaSmoothOptimizers/NLPModels.jl 
License: View license

#julia #models #optimization 
