Amazon recently announced that AWS Lambda customers can now enable functions to access Amazon Elastic File System (Amazon EFS). With EFS support, functions can share data across invocations, read large reference data files, and write output to a persistent, shared data store.

Until recently, Lambda functions could only access 512 MB of /tmp directory storage, which was sufficient for many use cases. For machine learning workloads, however, Lambda was not an option: models built with frameworks like TensorFlow are often gigabytes (GBs) in size and cannot fit in the /tmp directory. Nor was it an option when Lambda functions processed large amounts of data (GBs) and needed to stage it in the /tmp directory for easy access. Fortunately, with EFS support, customers can now mount a file system and provide a local path to read and write data at low latency.
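As a rough sketch of what this enables, the handler below (Python, using a hypothetical mount path of /mnt/data) reads and writes files on the mounted file system exactly as it would on local disk; state written in one invocation remains visible to later ones:

```python
import os

# Hypothetical mount path; it must match the "Local mount path"
# configured on the function (EFS paths must live under /mnt).
EFS_PATH = "/mnt/data"

def lambda_handler(event, context):
    # Append function output to the shared file system so that
    # subsequent invocations (or other functions) can read it.
    out_file = os.path.join(EFS_PATH, "output.txt")
    with open(out_file, "a") as f:
        f.write(f"processed request {context.aws_request_id}\n")

    # Read back the shared state accumulated across invocations.
    with open(out_file) as f:
        lines = f.readlines()

    return {"invocations_recorded": len(lines)}
```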

To use AWS Lambda with Amazon EFS, developers need an EFS access point, an application-specific entry point into an EFS file system. An access point specifies the operating system user and group to use when accessing the file system, enforces file system permissions, and can limit access to a specific path in the file system. This decouples the file system configuration from the application code.
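As an illustration, an access point could also be created programmatically with boto3; the file system ID, POSIX identity, and path below are placeholder values, not requirements:

```python
import boto3

efs = boto3.client("efs")

# Placeholder file system ID and illustrative POSIX identity.
response = efs.create_access_point(
    FileSystemId="fs-0123456789abcdef0",
    PosixUser={"Uid": 1000, "Gid": 1000},  # identity used for all file operations
    RootDirectory={
        "Path": "/lambda",                 # access is confined to this path
        "CreationInfo": {                  # directory is created on first use if absent
            "OwnerUid": 1000,
            "OwnerGid": 1000,
            "Permissions": "750",
        },
    },
)
print(response["AccessPointArn"])
```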

To leverage an EFS file system with Lambda functions, a developer first creates a file system in the EFS console and specifies an Amazon Virtual Private Cloud (VPC), which is necessary for the function to reach the EFS mount targets. Next, the developer adds the access point and reviews the configuration before hitting Create. The developer then heads over to the Lambda console to configure the function with the same VPC, its subnets, and a security group before adding the file system in the new File system section of the function configuration.
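The same wiring can be done programmatically. The sketch below uses boto3 to attach a VPC configuration and a file system config to an existing function; the function name, subnet and security group IDs, and access point ARN are placeholders, and the local mount path must begin with /mnt:

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholder identifiers throughout; the subnets and security group
# must belong to the same VPC as the EFS mount targets.
lambda_client.update_function_configuration(
    FunctionName="my-efs-function",
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
    FileSystemConfigs=[
        {
            "Arn": "arn:aws:elasticfilesystem:us-east-1:123456789012:"
                   "access-point/fsap-0123456789abcdef0",
            "LocalMountPath": "/mnt/data",  # where the function sees the file system
        }
    ],
)
```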

