Active Storage Overview
This guide covers how to attach files to your Active Record models.
After reading this guide, you will know:
- How to attach one or more files to a record.
- How to delete an attached file.
- How to link to an attached file.
- How to use variants to transform images.
- How to generate an image representation of a non-image file, such as a PDF or a video.
- How to send file uploads directly from browsers to a storage service, bypassing your application servers.
- How to clean up files stored during testing.
- How to implement support for additional storage services.
Chapters
- What is Active Storage?
- Requirements
- Setup
- Disk Service
- S3 Service (Amazon S3 and S3-compatible APIs)
- Microsoft Azure Storage Service
- Google Cloud Storage Service
- Mirror Service
- Public access
- Attaching Files to Records
- has_one_attached
- has_many_attached
- Attaching File/IO Objects
- Removing Files
- Serving Files
- Redirect mode
- Proxy mode
- Authenticated Controllers
- Downloading Files
- Analyzing Files
- Displaying Images, Videos, and PDFs
- Lazy vs Immediate Loading
- Transforming Images
- Previewing Files
- Direct Uploads
- Usage
- Cross-Origin Resource Sharing (CORS) configuration
- Direct upload JavaScript events
- Example
- Integrating with Libraries or Frameworks
- Testing
- Discarding files created during tests
- Adding attachments to fixtures
- Implementing Support for Other Cloud Services
- Purging Unattached Uploads
1 What is Active Storage?
Active Storage facilitates uploading files to a cloud storage service like Amazon S3, Google Cloud Storage, or Microsoft Azure Storage and attaching those files to Active Record objects. It comes with a local disk-based service for development and testing and supports mirroring files to subordinate services for backups and migrations.
Using Active Storage, an application can transform image uploads or generate image representations of non-image uploads like PDFs and videos, and extract metadata from arbitrary files.
1.1 Requirements
Various features of Active Storage depend on third-party software which Rails will not install, and must be installed separately:
- libvips v8.6+ or ImageMagick for image analysis and transformations
- ffmpeg v3.4+ for video previews and ffprobe for video/audio analysis
- poppler or muPDF for PDF previews
Image analysis and transformations also require the image_processing gem. Uncomment it in your Gemfile, or add it if necessary:

gem "image_processing", ">= 1.2"
Compared to libvips, ImageMagick is better known and more widely available. However, libvips can be up to 10x faster and consume 1/10 the memory. For JPEG files, this can be further improved by replacing libjpeg-dev with libjpeg-turbo-dev, which is 2-7x faster.
Before you install and use third-party software, make sure you understand the licensing implications of doing so. MuPDF, in particular, is licensed under AGPL and requires a commercial license for some uses.
2 Setup
Active Storage uses three tables in your application's database named active_storage_blobs, active_storage_variant_records, and active_storage_attachments. After creating a new application (or upgrading your application to Rails 5.2), run bin/rails active_storage:install to generate a migration that creates these tables. Use bin/rails db:migrate to run the migration.
active_storage_attachments is a polymorphic join table that stores your model's class name. If your model's class name changes, you will need to run a migration on this table to update the underlying record_type to your model's new class name.
If you are using UUIDs instead of integers as the primary key on your models, you will need to change the column type of active_storage_attachments.record_id and active_storage_variant_records.id in the generated migration accordingly.
Declare Active Storage services in config/storage.yml. For each service your application uses, provide a name and the requisite configuration. The example below declares three services named local, test, and amazon:
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>

test:
  service: Disk
  root: <%= Rails.root.join("tmp/storage") %>

amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  bucket: ""
  region: "" # e.g. 'us-east-1'
Tell Active Storage which service to use by setting Rails.application.config.active_storage.service. Because each environment will likely use a different service, it is recommended to do this on a per-environment basis. To use the disk service from the previous example in the development environment, you would add the following to config/environments/development.rb
:
# Store files locally.
config.active_storage.service = :local
To use the S3 service in production, you add the following to config/environments/production.rb
:
# Store files on Amazon S3.
config.active_storage.service = :amazon
To use the test service when testing, you add the following to config/environments/test.rb
:
# Store uploaded files on the local file system in a temporary directory.
config.active_storage.service = :test
Continue reading for more information on the built-in service adapters (e.g. Disk and S3) and the configuration they require.
Configuration files that are environment-specific will take precedence: in production, for example, the config/storage/production.yml file (if it exists) will take precedence over the config/storage.yml file.
It is recommended to use Rails.env in the bucket names to further reduce the risk of accidentally destroying production data.
amazon:
  service: S3
  # ...
  bucket: your_own_bucket-<%= Rails.env %>

google:
  service: GCS
  # ...
  bucket: your_own_bucket-<%= Rails.env %>

azure:
  service: AzureStorage
  # ...
  container: your_container_name-<%= Rails.env %>
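As a stand-alone illustration of the ERB interpolation above, you can render the same template snippet outside a Rails app, with a plain local variable standing in for Rails.env (the variable name env is just for this sketch):

```ruby
require "erb"

# `env` stands in for Rails.env when rendering the template outside Rails.
env = "production"
yaml = ERB.new("bucket: your_own_bucket-<%= env %>").result(binding)
puts yaml # => bucket: your_own_bucket-production
```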
2.1 Disk Service
Declare a Disk service in config/storage.yml
:
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
2.2 S3 Service (Amazon S3 and S3-compatible APIs)
To connect to Amazon S3, declare an S3 service in config/storage.yml
:
amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""
Optionally provide client and upload options:
amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""
  http_open_timeout: 0
  http_read_timeout: 0
  retry_limit: 0
  upload:
    server_side_encryption: "" # 'aws:kms' or 'AES256'
Set sensible client HTTP timeouts and retry limits for your application. In certain failure scenarios, the default AWS client configuration may cause connections to be held for up to several minutes and lead to request queuing.
Add the aws-sdk-s3 gem to your Gemfile:

gem "aws-sdk-s3", require: false
The core features of Active Storage require the following permissions: s3:ListBucket, s3:PutObject, s3:GetObject, and s3:DeleteObject. Public access additionally requires s3:PutObjectAcl. If you have additional upload options configured, such as setting ACLs, then additional permissions may be required.
If you want to use environment variables, standard SDK configuration files, profiles, IAM instance profiles or task roles, you can omit the access_key_id, secret_access_key, and region keys in the example above. The S3 Service supports all of the authentication options described in the AWS SDK documentation.
To connect to an S3-compatible object storage API such as DigitalOcean Spaces, provide the endpoint
:
digitalocean:
  service: S3
  endpoint: https://nyc3.digitaloceanspaces.com
  access_key_id: ...
  secret_access_key: ...
  # ...and other options
There are many other options available. You can check them in the AWS S3 Client documentation.
2.3 Microsoft Azure Storage Service
Declare an Azure Storage service in config/storage.yml
:
azure:
  service: AzureStorage
  storage_account_name: ""
  storage_access_key: ""
  container: ""
Add the azure-storage-blob gem to your Gemfile:

gem "azure-storage-blob", require: false
2.4 Google Cloud Storage Service
Declare a Google Cloud Storage service in config/storage.yml
:
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Optionally provide a Hash of credentials instead of a keyfile path:
google:
  service: GCS
  credentials:
    type: "service_account"
    project_id: ""
    private_key_id: <%= Rails.application.credentials.dig(:gcs, :private_key_id) %>
    private_key: <%= Rails.application.credentials.dig(:gcs, :private_key).dump %>
    client_email: ""
    client_id: ""
    auth_uri: "https://accounts.google.com/o/oauth2/auth"
    token_uri: "https://accounts.google.com/o/oauth2/token"
    auth_provider_x509_cert_url: "https://www.googleapis.com/oauth2/v1/certs"
    client_x509_cert_url: ""
  project: ""
  bucket: ""
Optionally provide Cache-Control metadata to set on uploaded assets:
google:
  service: GCS
  ...
  cache_control: "public, max-age=3600"
Optionally use IAM instead of the credentials when signing URLs. This is useful if you are authenticating your GKE applications with Workload Identity; see this Google Cloud blog post for more information.
google:
  service: GCS
  ...
  iam: true
Optionally use a specific GSA when signing URLs. When using IAM, the metadata server will be contacted to get the GSA email, but this metadata server is not always present (e.g. local tests) and you may wish to use a non-default GSA.
google:
  service: GCS
  ...
  iam: true
  gsa_email: "foobar@baz.iam.gserviceaccount.com"
Add the google-cloud-storage gem to your Gemfile:

gem "google-cloud-storage", "~> 1.11", require: false
2.5 Mirror Service
You can keep multiple services in sync by defining a mirror service. A mirror service replicates uploads and deletes across two or more subordinate services.
A mirror service is intended to be used temporarily during a migration between services in production. You can start mirroring to a new service, copy pre-existing files from the old service to the new, then go all-in on the new service.
Mirroring is not atomic. It is possible for an upload to succeed on the primary service and fail on any of the subordinate services. Before going all-in on a new service, verify that all files have been copied.
Define each of the services you'd like to mirror as described above. Reference them by name when defining a mirror service:
s3_west_coast:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""

s3_east_coast:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""

production:
  service: Mirror
  primary: s3_east_coast
  mirrors:
    - s3_west_coast
Although all secondary services receive uploads, downloads are always handled by the primary service.
Mirror services are compatible with direct uploads. New files are directly uploaded to the primary service. When a directly-uploaded file is attached to a record, a background job is enqueued to copy it to the secondary services.
2.6 Public access
By default, Active Storage assumes private access to services. This means generating signed, single-use URLs for blobs. If you'd rather make blobs publicly accessible, specify public: true in your app's config/storage.yml:
gcs: &gcs
  service: GCS
  project: ""

private_gcs:
  <<: *gcs
  credentials: <%= Rails.root.join("path/to/private_keyfile.json") %>
  bucket: ""

public_gcs:
  <<: *gcs
  credentials: <%= Rails.root.join("path/to/public_keyfile.json") %>
  bucket: ""
  public: true
Make sure your buckets are properly configured for public access. See docs on how to enable public read permissions for Amazon S3, Google Cloud Storage, and Microsoft Azure storage services. Amazon S3 additionally requires that you have the s3:PutObjectAcl
permission.
When converting an existing application to use public: true, make sure to update every private file in the bucket to be publicly-readable before switching over.
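One possible shape for that update, sketched here as an assumption rather than an official recipe, is a one-off loop over all blobs; it relies on the S3 service's underlying aws-sdk-s3 bucket object and on having the s3:PutObjectAcl permission:

```ruby
# One-off sketch: mark every existing blob's S3 object as publicly readable.
# Run from a Rails console; assumes the configured service is the S3 service.
ActiveStorage::Blob.find_each do |blob|
  blob.service.bucket.object(blob.key).acl.put(acl: "public-read")
end
```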
3 Attaching Files to Records
3.1 has_one_attached
The has_one_attached macro sets up a one-to-one mapping between records and files. Each record can have one file attached to it.
For example, suppose your application has a User model. If you want each user to have an avatar, define the User model as follows:
class User < ApplicationRecord
  has_one_attached :avatar
end
or if you are using Rails 6.0+, you can run a model generator command like this:

bin/rails generate model User avatar:attachment
You can create a user with an avatar:

<%= form.file_field :avatar %>
class SignupController < ApplicationController
  def create
    user = User.create!(user_params)
    session[:user_id] = user.id
    redirect_to root_path
  end

  private
    def user_params
      params.require(:user).permit(:email_address, :password, :avatar)
    end
end
Call avatar.attach to attach an avatar to an existing user:

user.avatar.attach(params[:avatar])
Call avatar.attached? to determine whether a particular user has an avatar:

user.avatar.attached?
In some cases you might want to override the default service for a specific attachment. You can configure specific services per attachment using the service option:
class User < ApplicationRecord
  has_one_attached :avatar, service: :s3
end
You can configure specific variants per attachment by calling the variant method on the yielded attachable object:
class User < ApplicationRecord
  has_one_attached :avatar do |attachable|
    attachable.variant :thumb, resize_to_limit: [100, 100]
  end
end
Call avatar.variant(:thumb) to get a thumb variant of an avatar:

<%= image_tag user.avatar.variant(:thumb) %>
3.2 has_many_attached
The has_many_attached macro sets up a one-to-many relationship between records and files. Each record can have many files attached to it.
For example, suppose your application has a Message model. If you want each message to have many images, define the Message model as follows:
class Message < ApplicationRecord
  has_many_attached :images
end
or if you are using Rails 6.0+, you can run a model generator command like this:

bin/rails generate model Message images:attachments
You can create a message with images:
class MessagesController < ApplicationController
  def create
    message = Message.create!(message_params)
    redirect_to message
  end

  private
    def message_params
      params.require(:message).permit(:title, :content, images: [])
    end
end
Call images.attach to add new images to an existing message:

@message.images.attach(params[:images])
Call images.attached? to determine whether a particular message has any images:

@message.images.attached?
Overriding the default service is done the same way as has_one_attached, by using the service option:
class Message < ApplicationRecord
  has_many_attached :images, service: :s3
end
Configuring specific variants is done the same way as has_one_attached, by calling the variant method on the yielded attachable object:
class Message < ApplicationRecord
  has_many_attached :images do |attachable|
    attachable.variant :thumb, resize_to_limit: [100, 100]
  end
end
3.3 Attaching File/IO Objects
Sometimes you need to attach a file that doesn't arrive via an HTTP request. For example, you may want to attach a file you generated on disk or downloaded from a user-submitted URL. You may also want to attach a fixture file in a model test. To do that, provide a Hash containing at least an open IO object and a filename:
@message.images.attach(io: File.open('/path/to/file'), filename: 'file.pdf')
When possible, provide a content type as well. Active Storage attempts to determine a file's content type from its data. It falls back to the content type you provide if it can't do that.

@message.images.attach(io: File.open('/path/to/file'), filename: 'file.pdf', content_type: 'application/pdf')
You can bypass the content type inference from the data by passing in identify: false along with the content_type.

@message.images.attach(io: File.open('/path/to/file'), filename: 'file.pdf', content_type: 'application/pdf', identify: false)
If you don't provide a content type and Active Storage can't determine the file's content type automatically, it defaults to application/octet-stream.
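The precedence just described can be sketched in plain Ruby. This is only an illustration of the rule, not Active Storage's actual implementation:

```ruby
# The type identified from the file data wins, then the declared
# content_type, then the application/octet-stream default.
def resolved_content_type(identified, declared)
  identified || declared || "application/octet-stream"
end

puts resolved_content_type(nil, "application/pdf") # application/pdf
puts resolved_content_type(nil, nil)               # application/octet-stream
```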
4 Removing Files
To remove an attachment from a model, call purge on the attachment. If your application is set up to use Active Job, removal can be done in the background instead by calling purge_later. Purging deletes the blob and the file from the storage service.

# Synchronously destroy the avatar and actual resource files.
user.avatar.purge

# Destroy the associated models and actual resource files async, via Active Job.
user.avatar.purge_later
5 Serving Files
Active Storage supports two ways to serve files: redirecting and proxying.
All Active Storage controllers are publicly accessible by default. The generated URLs are hard to guess, but permanent by design. If your files require a higher level of protection, consider implementing Authenticated Controllers.
5.1 Redirect mode
To generate a permanent URL for a blob, you can pass the blob to the url_for view helper. This generates a URL with the blob's signed_id that is routed to the blob's RedirectController:

url_for(user.avatar)
# => /rails/active_storage/blobs/:signed_id/my-avatar.png
The RedirectController redirects to the actual service endpoint. This indirection decouples the service URL from the actual one, and allows, for example, mirroring attachments in different services for high availability. The redirection has an HTTP expiration of 5 minutes.
To create a download link, use the rails_blob_{path|url} helper. Using this helper allows you to set the disposition.

rails_blob_path(user.avatar, disposition: "attachment")
To prevent XSS attacks, Active Storage forces the Content-Disposition header to "attachment" for some kinds of files. To change this behaviour, see the available configuration options in Configuring Rails Applications.
If you need to create a link from outside of controller/view context (background jobs, cron jobs, etc.), you can access the rails_blob_path like this:

Rails.application.routes.url_helpers.rails_blob_path(user.avatar, only_path: true)
5.2 Proxy mode
Optionally, files can be proxied instead. This means that your application servers will download file data from the storage service in response to requests. This can be useful for serving files from a CDN.
You can configure Active Storage to use proxying by default:

# config/initializers/active_storage.rb
Rails.application.config.active_storage.resolve_model_to_route = :rails_storage_proxy
Or if you want to explicitly proxy specific attachments there are URL helpers you can use in the form of rails_storage_proxy_path and rails_storage_proxy_url.

<%= image_tag rails_storage_proxy_path(@user.avatar) %>
5.2.1 Putting a CDN in front of Active Storage
Additionally, in order to use a CDN for Active Storage attachments, you will need to generate URLs with proxy mode so that they are served by your app and the CDN will cache the attachment without any extra configuration. This works out of the box because the default Active Storage proxy controller sets an HTTP header indicating to the CDN to cache the response.
You should also make sure that the generated URLs use the CDN host instead of your app host. There are multiple ways to achieve this, but in general it involves tweaking your config/routes.rb file so that you can generate the proper URLs for the attachments and their variations. As an example, you could add this:

# config/routes.rb
direct :cdn_image do |model, options|
  expires_in = options.delete(:expires_in) { ActiveStorage.urls_expire_in }

  if model.respond_to?(:signed_id)
    route_for(
      :rails_service_blob_proxy,
      model.signed_id(expires_in: expires_in),
      model.filename,
      options.merge(host: ENV['CDN_HOST'])
    )
  else
    signed_blob_id = model.blob.signed_id(expires_in: expires_in)
    variation_key  = model.variation.key
    filename       = model.blob.filename

    route_for(
      :rails_blob_representation_proxy,
      signed_blob_id,
      variation_key,
      filename,
      options.merge(host: ENV['CDN_HOST'])
    )
  end
end
and then generate routes like this:

<%= cdn_image_url(user.avatar.variant(resize_to_limit: [128, 128])) %>
5.3 Authenticated Controllers
All Active Storage controllers are publicly accessible by default. The generated URLs use a plain signed_id, making them hard to guess but permanent. Anyone that knows the blob URL will be able to access it, even if a before_action in your ApplicationController would otherwise require a login. If your files require a higher level of protection, you can implement your own authenticated controllers, based on the ActiveStorage::Blobs::RedirectController, ActiveStorage::Blobs::ProxyController, ActiveStorage::Representations::RedirectController, and ActiveStorage::Representations::ProxyController.
To only allow an account to access their own logo, you could do the following:

# config/routes.rb
resource :account do
  resource :logo
end
# app/controllers/logos_controller.rb
class LogosController < ApplicationController
  # Through ApplicationController:
  # include Authenticate, SetCurrentAccount

  def show
    redirect_to Current.account.logo.url
  end
end
<%= image_tag account_logo_path %>
Then you might want to disable the Active Storage default routes with:

config.active_storage.draw_routes = false

to prevent files from being accessed via the publicly accessible URLs.
6 Downloading Files
Sometimes you need to process a blob after it's uploaded, for example to convert it to a different format. Use the attachment's download method to read a blob's binary data into memory:

binary = user.avatar.download
You might want to download a blob to a file on disk so an external program (e.g. a virus scanner or media transcoder) can operate on it. Use the attachment's open method to download a blob to a tempfile on disk:

message.video.open do |file|
  system '/path/to/virus/scanner', file.path
  # ...
end
It's important to know that the file is not yet available in the after_create callback, but only in after_create_commit.
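For example, a model that kicks off processing of an attachment would hook into after_create_commit rather than after_create; this is a sketch, and ProcessVideoJob is a hypothetical job name, not part of Active Storage:

```ruby
class Message < ApplicationRecord
  has_one_attached :video

  # The uploaded file is not on the storage service yet in after_create,
  # so enqueue work from after_create_commit instead.
  after_create_commit :enqueue_video_processing

  private
    def enqueue_video_processing
      ProcessVideoJob.perform_later(self) # hypothetical job
    end
end
```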
7 Analyzing Files
Active Storage analyzes files once they've been uploaded by queuing a job in Active Job. Analyzed files will store additional data in the metadata hash, including analyzed: true. You can check whether a blob has been analyzed by calling analyzed? on it.
Image analysis provides width and height attributes. Video analysis provides these, as well as duration, angle, display_aspect_ratio, and video and audio booleans to indicate the presence of those channels. Audio analysis provides duration and bit_rate attributes.
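Assuming the analysis job has already run for the attachments from earlier examples, these values can be read from the blob's metadata hash; a sketch:

```ruby
blob = message.video.blob

if blob.analyzed?
  blob.metadata["duration"]
  blob.metadata["width"]
  blob.metadata["audio"] # true if the video has an audio channel
end
```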
8 Displaying Images, Videos, and PDFs
Active Storage supports representing a variety of files. You can call representation on an attachment to display an image variant, or a preview of a video or PDF. Before calling representation, check if the attachment can be represented by calling representable?. Some file formats can't be previewed by Active Storage out of the box (e.g. Word documents); if representable? returns false you may want to link to the file instead.

<ul>
  <% @message.files.each do |file| %>
    <li>
      <% if file.representable? %>
        <%= image_tag file.representation(resize_to_limit: [100, 100]) %>
      <% else %>
        <%= link_to rails_blob_path(file, disposition: "attachment") do %>
          <%= image_tag "placeholder.png", alt: "Download file" %>
        <% end %>
      <% end %>
    </li>
  <% end %>
</ul>
Internally, representation calls variant for images, and preview for previewable files. You can also call these methods directly.
8.1 Lazy vs Immediate Loading
By default, Active Storage will process representations lazily. This code:
image_tag file.representation(resize_to_limit: [100, 100])
Will generate an <img> tag with the src pointing to the ActiveStorage::Representations::RedirectController. The browser will make a request to that controller, which will return a 302 redirect to the file on the remote service (or in proxy mode, return the file contents). Loading the file lazily allows features like single-use URLs to work without slowing down your initial page loads.
This works fine for most cases.
If you want to generate URLs for images immediately, you can call .processed.url:

image_tag file.representation(resize_to_limit: [100, 100]).processed.url
The Active Storage variant tracker improves performance of this by storing a record in the database if the requested representation has been processed before. Thus, the above code will only make an API call to the remote service (e.g. S3) once, and once a variant is stored, will use that. The variant tracker runs automatically, but can be disabled through config.active_storage.track_variants.
.
If you're rendering lots of images on a page, the above example could result in N+1 queries loading all the variant records. To avoid these N+1 queries, use the named scopes on ActiveStorage::Attachment.

message.images.with_all_variant_records.each do |file|
  image_tag file.representation(resize_to_limit: [100, 100]).processed.url
end
8.2 Transforming Images
Transforming images allows you to display the image at your choice of dimensions. To create a variation of an image, call variant on the attachment. You can pass any transformation supported by the variant processor to the method. When the browser hits the variant URL, Active Storage will lazily transform the original blob into the specified format and redirect to its new service location.

<%= image_tag user.avatar.variant(resize_to_limit: [100, 100]) %>
If a variant is requested, Active Storage will automatically apply transformations depending on the image's format:
- Content types that are variable (as dictated by config.active_storage.variable_content_types) and not considered web images (as dictated by config.active_storage.web_image_content_types) will be converted to PNG.
- If quality is not specified, the variant processor's default quality for the format will be used.
Active Storage can use either Vips or MiniMagick as the variant processor. The default depends on your config.load_defaults target version, and the processor can be changed by setting config.active_storage.variant_processor.
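For example, to select libvips explicitly (assuming the image_processing gem and libvips are installed):

```ruby
# config/application.rb
config.active_storage.variant_processor = :vips # or :mini_magick
```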
The two processors are not fully compatible, so when migrating an existing application between MiniMagick and Vips, some changes have to be made if using options that are format specific:

<!-- MiniMagick -->
<%= image_tag user.avatar.variant(resize_to_limit: [100, 100], format: :jpeg, sampling_factor: "4:2:0", strip: true, interlace: "JPEG", colorspace: "sRGB", quality: 80) %>

<!-- Vips -->
<%= image_tag user.avatar.variant(resize_to_limit: [100, 100], format: :jpeg, saver: { subsample_mode: "on", strip: true, interlace: true, quality: 80 }) %>
8.3 Previewing Files
Some non-image files can be previewed: that is, they can be presented as images. For example, a video file can be previewed by extracting its first frame. Out of the box, Active Storage supports previewing videos and PDF documents. To create a link to a lazily-generated preview, use the attachment's preview method:

<%= image_tag message.video.preview(resize_to_limit: [100, 100]) %>
To add support for another format, add your own previewer. See the ActiveStorage::Preview documentation for more information.
9 Direct Uploads
Active Storage, with its included JavaScript library, supports uploading directly from the client to the cloud.
9.1 Usage
- Include activestorage.js in your application's JavaScript bundle.

  Using the asset pipeline:

  //= require activestorage

  Using the npm package:

  import * as ActiveStorage from "@rails/activestorage"
  ActiveStorage.start()
- Add direct_upload: true to your file field:

  <%= form.file_field :attachments, multiple: true, direct_upload: true %>

  Or, if you aren't using a FormBuilder, add the data attribute directly:

  <input type="file" data-direct-upload-url="<%= rails_direct_uploads_url %>" />
- Configure CORS on third-party storage services to allow direct upload requests.
- That's it! Uploads begin upon form submission.
9.2 Cross-Origin Resource Sharing (CORS) configuration
To make direct uploads to a third-party service work, you'll need to configure the service to allow cross-origin requests from your app. Consult the CORS documentation for your service:
- S3
- Google Cloud Storage
- Azure Storage
Take care to allow:
- All origins from which your app is accessed
- The PUT request method
- The following headers:
  - Origin
  - Content-Type
  - Content-MD5
  - Content-Disposition (except for Azure Storage)
  - x-ms-blob-content-disposition (for Azure Storage only)
  - x-ms-blob-type (for Azure Storage only)
  - Cache-Control (for GCS, only if cache_control is set)
No CORS configuration is required for the Disk service since it shares your app's origin.
9.2.1 Example: S3 CORS configuration

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT"],
    "AllowedOrigins": ["https://www.example.com"],
    "ExposeHeaders": ["Origin", "Content-Type", "Content-MD5", "Content-Disposition"],
    "MaxAgeSeconds": 3600
  }
]
9.2.2 Example: Google Cloud Storage CORS configuration

[
  {
    "origin": ["https://www.example.com"],
    "method": ["PUT"],
    "responseHeader": ["Origin", "Content-Type", "Content-MD5", "Content-Disposition"],
    "maxAgeSeconds": 3600
  }
]
9.2.3 Example: Azure Storage CORS configuration

<Cors>
  <CorsRule>
    <AllowedOrigins>https://www.example.com</AllowedOrigins>
    <AllowedMethods>PUT</AllowedMethods>
    <AllowedHeaders>Origin, Content-Type, Content-MD5, x-ms-blob-content-disposition, x-ms-blob-type</AllowedHeaders>
    <MaxAgeInSeconds>3600</MaxAgeInSeconds>
  </CorsRule>
</Cors>
9.3 Direct upload JavaScript events

Event name | Event target | Event data (event.detail) | Description
---|---|---|---
direct-uploads:start | <form> | None | A form containing files for direct upload fields was submitted.
direct-upload:initialize | <input> | {id, file} | Dispatched for every file after form submission.
direct-upload:start | <input> | {id, file} | A direct upload is starting.
direct-upload:before-blob-request | <input> | {id, file, xhr} | Before making a request to your application for direct upload metadata.
direct-upload:before-storage-request | <input> | {id, file, xhr} | Before making a request to store a file.
direct-upload:progress | <input> | {id, file, progress} | As requests to store files progress.
direct-upload:error | <input> | {id, file, error} | An error occurred. An alert will display unless this event is canceled.
direct-upload:end | <input> | {id, file} | A direct upload has ended.
direct-uploads:end | <form> | None | All direct uploads have ended.
9.4 Example
You can use these events to show the progress of an upload.
To show the uploaded files in a form:

// direct_uploads.js

addEventListener("direct-upload:initialize", event => {
  const { target, detail } = event
  const { id, file } = detail
  target.insertAdjacentHTML("beforebegin", `
    <div id="direct-upload-${id}" class="direct-upload direct-upload--pending">
      <div id="direct-upload-progress-${id}" class="direct-upload__progress" style="width: 0%"></div>
      <span class="direct-upload__filename"></span>
    </div>
  `)
  target.previousElementSibling.querySelector(`.direct-upload__filename`).textContent = file.name
})

addEventListener("direct-upload:start", event => {
  const { id } = event.detail
  const element = document.getElementById(`direct-upload-${id}`)
  element.classList.remove("direct-upload--pending")
})

addEventListener("direct-upload:progress", event => {
  const { id, progress } = event.detail
  const progressElement = document.getElementById(`direct-upload-progress-${id}`)
  progressElement.style.width = `${progress}%`
})

addEventListener("direct-upload:error", event => {
  event.preventDefault()
  const { id, error } = event.detail
  const element = document.getElementById(`direct-upload-${id}`)
  element.classList.add("direct-upload--error")
  element.setAttribute("title", error)
})

addEventListener("direct-upload:end", event => {
  const { id } = event.detail
  const element = document.getElementById(`direct-upload-${id}`)
  element.classList.add("direct-upload--complete")
})
Add styles:
/* direct_uploads.css */

.direct-upload {
  display: inline-block;
  position: relative;
  padding: 2px 4px;
  margin: 0 3px 3px 0;
  border: 1px solid rgba(0, 0, 0, 0.3);
  border-radius: 3px;
  font-size: 11px;
  line-height: 13px;
}

.direct-upload--pending {
  opacity: 0.6;
}

.direct-upload__progress {
  position: absolute;
  top: 0;
  left: 0;
  bottom: 0;
  opacity: 0.2;
  background: #0076ff;
  transition: width 120ms ease-out, opacity 60ms 60ms ease-in;
  transform: translate3d(0, 0, 0);
}

.direct-upload--complete .direct-upload__progress {
  opacity: 0.4;
}

.direct-upload--error {
  border-color: red;
}

input[type=file][data-direct-upload-url][disabled] {
  display: none;
}
9.5 Integrating with Libraries or Frameworks
If you want to use the Direct Upload feature from a JavaScript framework, or you want to integrate custom drag-and-drop solutions, you can use the DirectUpload
class for this purpose. Upon receiving a file from your library of choice, instantiate a DirectUpload and call its create method. create takes a callback to invoke when the upload completes.
import { DirectUpload } from "@rails/activestorage"

const input = document.querySelector('input[type=file]')

// Bind to file drop - use the ondrop on a parent element or use a
// library like Dropzone
const onDrop = (event) => {
  event.preventDefault()
  const files = event.dataTransfer.files;
  Array.from(files).forEach(file => uploadFile(file))
}

// Bind to normal file selection
input.addEventListener('change', (event) => {
  Array.from(input.files).forEach(file => uploadFile(file))
  // you might clear the selected files from the input
  input.value = null
})

const uploadFile = (file) => {
  // your form needs the file_field direct_upload: true, which
  // provides data-direct-upload-url
  const url = input.dataset.directUploadUrl
  const upload = new DirectUpload(file, url)

  upload.create((error, blob) => {
    if (error) {
      // Handle the error
    } else {
      // Add an appropriately-named hidden input to the form with a
      // value of blob.signed_id so that the blob ids will be
      // transmitted in the normal upload flow
      const hiddenField = document.createElement('input')
      hiddenField.setAttribute("type", "hidden");
      hiddenField.setAttribute("value", blob.signed_id);
      hiddenField.name = input.name
      document.querySelector('form').appendChild(hiddenField)
    }
  })
}
If you need to track the progress of the file upload, you can pass a third parameter to the DirectUpload
constructor. During the upload, DirectUpload will call the object's directUploadWillStoreFileWithXHR
method. You can then bind your own progress handler on the XHR.
import { DirectUpload } from "@rails/activestorage"

class Uploader {
  constructor(file, url) {
    this.upload = new DirectUpload(file, url, this)
  }

  uploadFile(file) {
    this.upload.create((error, blob) => {
      if (error) {
        // Handle the error
      } else {
        // Add an appropriately-named hidden input to the form
        // with a value of blob.signed_id
      }
    })
  }

  directUploadWillStoreFileWithXHR(request) {
    request.upload.addEventListener("progress",
      event => this.directUploadDidProgress(event))
  }

  directUploadDidProgress(event) {
    // Use event.loaded and event.total to update the progress bar
  }
}
Using Direct Uploads can sometimes result in a file that is uploaded but never attached to a record. Consider purging unattached uploads.
10 Testing
Use fixture_file_upload
to test uploading a file in an integration or controller test. Rails handles files like any other parameter.
class SignupController < ActionDispatch::IntegrationTest
  test "can sign up" do
    post signup_path, params: {
      name: "David",
      avatar: fixture_file_upload("david.png", "image/png")
    }

    user = User.order(:created_at).last
    assert user.avatar.attached?
  end
end
10.1 Discarding files created during tests
10.1.1 System tests
System tests clean up test data by rolling back a transaction. Because destroy
is never called on an object, the attached files are never cleaned up. If you want to clear the files, you can do it in an after_teardown
callback. Doing it here ensures that all connections created during the test are complete and you won't receive an error from Active Storage saying it can't find a file.
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # ...
  def after_teardown
    super
    FileUtils.rm_rf(ActiveStorage::Blob.service.root)
  end
  # ...
end
If you're using parallel tests and the DiskService
, you should configure each process to use its own folder for Active Storage. This way, the teardown
callback will only delete files from the relevant process's tests.
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # ...
  parallelize_setup do |i|
    ActiveStorage::Blob.service.root = "#{ActiveStorage::Blob.service.root}-#{i}"
  end
  # ...
end
If your system tests verify the deletion of a model with attachments and you're using Active Job, set your test environment to use the inline queue adapter so the purge job is executed immediately rather than at an unknown time in the future.
# Use inline job processing to make things happen immediately
config.active_job.queue_adapter = :inline
10.1.2 Integration tests
Similarly to System Tests, files uploaded during Integration Tests will not be automatically cleaned up. If you want to clear the files, you can do it in an after_teardown
callback.
class ActionDispatch::IntegrationTest
  def after_teardown
    super
    FileUtils.rm_rf(ActiveStorage::Blob.service.root)
  end
end
If you're using parallel tests and the Disk service, you should configure each process to use its own folder for Active Storage. This way, the teardown
callback will only delete files from the relevant process's tests.
class ActionDispatch::IntegrationTest
  parallelize_setup do |i|
    ActiveStorage::Blob.service.root = "#{ActiveStorage::Blob.service.root}-#{i}"
  end
end
10.2 Adding attachments to fixtures
You can add attachments to your existing fixtures. First, you'll want to create a separate storage service:
# config/storage.yml
test_fixtures:
  service: Disk
  root: <%= Rails.root.join("tmp/storage_fixtures") %>
This tells Active Storage where to "upload" fixture files to, so it should be a temporary directory. By making it a different directory to your regular test
service, you can separate fixture files from files uploaded during a test.
Next, create fixture files for the Active Storage classes:
# active_storage/attachments.yml
david_avatar:
  name: avatar
  record: david (User)
  blob: david_avatar_blob
# active_storage/blobs.yml
david_avatar_blob: <%= ActiveStorage::FixtureSet.blob filename: "david.png", service_name: "test_fixtures" %>
Then put a file in your fixtures directory (the default path is test/fixtures/files
) with the corresponding filename. See the ActiveStorage::FixtureSet
docs for more information.
Once everything is set up, you'll be able to access attachments in your tests:
class UserTest < ActiveSupport::TestCase
  def test_avatar
    avatar = users(:david).avatar

    assert avatar.attached?
    assert_not_nil avatar.download
    assert_equal 1000, avatar.byte_size
  end
end
10.2.1 Cleaning up fixtures
While files uploaded in tests are cleaned up at the end of each test, you only need to clean up fixture files once: when all your tests complete.
If you're using parallel tests, call parallelize_teardown
:
class ActiveSupport::TestCase
  # ...
  parallelize_teardown do |i|
    FileUtils.rm_rf(ActiveStorage::Blob.services.fetch(:test_fixtures).root)
  end
  # ...
end
If you're not running parallel tests, use Minitest.after_run
or the equivalent for your test framework (e.g. after(:suite)
for RSpec):
# test_helper.rb
Minitest.after_run do
  FileUtils.rm_rf(ActiveStorage::Blob.services.fetch(:test_fixtures).root)
end
11 Implementing Support for Other Cloud Services
If you need to support a cloud service other than these, you will need to implement the Service. Each service extends ActiveStorage::Service
by implementing the methods necessary to upload and download files to the cloud.
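To illustrate the shape such a service takes, here is a minimal, self-contained sketch in plain Ruby (no Rails dependency) of the core operations: upload, download, delete, and exist?. The class name, the simplified method signatures, and the flat on-disk layout are assumptions for illustration only; a real implementation would subclass ActiveStorage::Service, handle checksums and streaming, and be registered in config/storage.yml.

```ruby
require "fileutils"
require "stringio"

# Illustrative sketch of the method surface a custom storage service
# provides. This is NOT the full ActiveStorage::Service API; it shows
# the core key/value operations against a local directory.
class SimpleDiskService
  def initialize(root:)
    @root = root
    FileUtils.mkdir_p(@root)
  end

  # Store the IO's bytes under the given key.
  def upload(key, io)
    File.open(path_for(key), "wb") { |file| IO.copy_stream(io, file) }
  end

  # Return the stored bytes for the key.
  def download(key)
    File.binread(path_for(key))
  end

  # Remove the stored file, if present.
  def delete(key)
    File.delete(path_for(key)) if exist?(key)
  end

  def exist?(key)
    File.exist?(path_for(key))
  end

  private

  def path_for(key)
    File.join(@root, key)
  end
end
```

A real service would additionally implement URL generation and direct-upload support, but the lifecycle above (upload, download, delete, exist?) is the backbone every backend shares.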
12 Purging Unattached Uploads
There are cases where a file is uploaded but never attached to a record. This can happen when using Direct Uploads. You can query for unattached records using the unattached scope. Below is an example using a custom rake task.
namespace :active_storage do
  desc "Purges unattached Active Storage blobs. Run regularly."
  task purge_unattached: :environment do
    ActiveStorage::Blob.unattached.where("active_storage_blobs.created_at <= ?", 2.days.ago).find_each(&:purge_later)
  end
end
The query generated by ActiveStorage::Blob.unattached
can be slow and potentially disruptive on applications with larger databases.
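If that cost is a concern, one mitigation is to cap how much work each run does by purging in fixed-size batches. The sketch below is plain Ruby rather than ActiveRecord so it can stand alone: FakeBlob, purge!, and purge_unattached are illustrative stand-ins, not Active Storage API, and stand in for ActiveStorage::Blob, purge_later, and the rake task's body (which could use in_batches for the same effect).

```ruby
# Illustrative stand-in for a blob record: a key, a creation timestamp,
# and a flag recording whether it has been purged.
FakeBlob = Struct.new(:key, :created_at, :purged) do
  def purge!
    self.purged = true
  end
end

# Purge blobs older than the cutoff in fixed-size slices, mirroring how
# the rake task above could spread its deletes over several small batches
# instead of one large burst.
def purge_unattached(blobs, cutoff:, batch_size: 100)
  stale = blobs.select { |blob| blob.created_at <= cutoff }
  stale.each_slice(batch_size) { |batch| batch.each(&:purge!) }
  stale.length
end
```

The same shape applies to the real task: filter on created_at, then process in batches sized to what your database tolerates.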
Feedback
You're encouraged to help improve the quality of this guide.
Please contribute if you see any typos or factual errors. To get started, you can read our documentation contributions section.
You may also find incomplete content or stuff that is not up to date. Please do add any missing documentation for main. Make sure to check Edge Guides first to verify if the issues are already fixed or not on the main branch. Check the Ruby on Rails Guides Guidelines for style and conventions.
If for whatever reason you spot something to fix but cannot patch it yourself, please open an issue.
And last but not least, any kind of discussion regarding Ruby on Rails documentation is very welcome on the rubyonrails-docs mailing list.
Source: https://edgeguides.rubyonrails.org/active_storage_overview.html