This is a list of things you, as a developer, should be aware of when working with a solution hosted at Chainbox.
It is probably also good practice to follow these guidelines on sites hosted outside Chainbox.
We distinguish between two types of files on the server.
Website (code, JavaScript, CSS, DLLs, views/templates, etc.)
Temporary data (cache files, Lucene indexes, etc.)
The website files are read-only and can in general only be modified via Git/deployment.
If you need to store temporary files, they should be placed in the following directories, which are virtual directories mapped to writable storage.
App_Data/ for content that must be proxied through the application in order to be accessed.
Beware that this data is not backed up and can disappear during service updates, maintenance, website deployments, etc. Data stored in these folders is only available to the local server and cannot be reached from other instances (load-balanced/fail-over), so in general the website should be able to repopulate the data at any time.
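As an illustration, here is a minimal sketch of that pattern, assuming a classic ASP.NET/Umbraco (.NET Framework) setup; the file name, the path under App_Data/ and the regeneration logic are hypothetical placeholders.

```csharp
// Sketch only: cache a generated value under App_Data/ and rebuild it if the
// file has been wiped, e.g. after a deployment or maintenance window.
using System;
using System.IO;
using System.Web.Hosting;

public static class TempCache
{
    public static string GetOrRebuild()
    {
        // App_Data/ is mapped to local, writeable storage on this server only.
        var path = HostingEnvironment.MapPath("~/App_Data/cache/expensive-result.json");
        Directory.CreateDirectory(Path.GetDirectoryName(path));

        if (File.Exists(path))
            return File.ReadAllText(path);

        // The data may disappear at any time, so the site must always be able
        // to rebuild it from the original source.
        var result = BuildExpensiveResult();
        File.WriteAllText(path, result);
        return result;
    }

    private static string BuildExpensiveResult() =>
        "{ \"generated\": \"" + DateTime.UtcNow.ToString("o") + "\" }";
}
```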
Files that need to be stored persistently should be placed in Umbraco media, the chainbox.io CDN or an external service of your own choosing.
The master branch should at any point in time be ready to be rolled out on the site / in production.
We might deploy/roll out from master without any warning - e.g. in relation to server updates, moving sites or other maintenance-related tasks.
If the backend code establishes connections to external services that only support the legacy IPv4 protocol, be aware that our servers only have IPv6 addresses and use NAT64 technology for communicating with the IPv4 internet.
Therefore, make sure not to hardcode IPv4 addresses in the solution - instead, always use DNS resolution so NAT64/DNS64 can make the necessary conversions, if you are not able to have the service exposed via IPv6.
Also beware that if the domain is DNSSEC-enabled, it can in rare cases cause problems (https://blog.apnic.net/2016/06/09/lets-talk-ipv6-dns64-dnssec/).
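For illustration, a minimal sketch of the DNS-based approach, assuming a .NET backend; the hostname and endpoint are hypothetical placeholders.

```csharp
// Sketch only: reach an IPv4-only external service by hostname so the
// platform's DNS64/NAT64 can translate the connection.
using System.Net.Http;
using System.Threading.Tasks;

public static class LegacyServiceClient
{
    private static readonly HttpClient Client = new HttpClient();

    public static Task<string> FetchAsync()
    {
        // Don't: hardcode an IPv4 literal such as "http://203.0.113.10/v1/status";
        // that bypasses DNS64 and is unreachable from an IPv6-only server.
        //
        // Do: use the hostname, so DNS64 can synthesize an IPv6 address and
        // NAT64 can carry the traffic to the IPv4-only service.
        return Client.GetStringAsync("https://api.legacy-partner.example/v1/status");
    }
}
```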
We don't recommend implementing scheduled tasks or other long-running requests in the website code - e.g. generating feeds, sitemaps, etc. - as web servers are not designed for that.
Doing this can lead to the site being slow, being CPU-throttled, restarting suddenly or running out of RAM, or even being unavailable for a period of time.
Long-running tasks are better suited for running in dedicated workers.
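As a rough illustration, a dedicated worker could be as simple as a separate console process that does the work on a schedule and pushes the result to persistent storage; the sitemap example and the one-hour interval below are hypothetical.

```csharp
// Sketch of a separate worker process that regenerates a sitemap on a schedule,
// instead of doing the work inside a web request.
using System;
using System.Threading.Tasks;

public static class SitemapWorker
{
    public static async Task Main()
    {
        while (true)
        {
            try
            {
                Console.WriteLine($"{DateTime.UtcNow:o} regenerating sitemap...");
                await RegenerateSitemapAsync();
            }
            catch (Exception ex)
            {
                // Log and continue; a failed run should not kill the worker.
                Console.Error.WriteLine(ex);
            }

            // Wait an hour before the next run.
            await Task.Delay(TimeSpan.FromHours(1));
        }
    }

    private static Task RegenerateSitemapAsync()
    {
        // Fetch content, build the XML and push it to persistent storage
        // (Umbraco media, the CDN or an external store); omitted here.
        return Task.CompletedTask;
    }
}
```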
The total size of all headers in a response must not exceed 8 KB.
Session storage is in-process and thus not shared among servers, so sessions should be used in a way that takes this into consideration, since requests can hit different instances of the application across servers/nodes.
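One way to honour this, sketched below under the assumption of classic ASP.NET, is to treat the session as a per-server cache and always be able to rebuild its contents from a persistent store; the basket example and LoadBasketFromDatabase are hypothetical.

```csharp
// Sketch only: another request may land on a different instance with an empty
// session, so session values are treated as a local cache with a fallback.
using System.Web;

public class Basket { }

public static class BasketHelper
{
    public static Basket GetBasket(HttpContextBase context, string userId)
    {
        var session = context.Session;
        var basket = session?["basket"] as Basket;
        if (basket == null)
        {
            // Session miss: this node has not seen the user before (or the
            // session was lost), so rebuild from the shared/persistent store.
            basket = LoadBasketFromDatabase(userId);
            if (session != null)
            {
                session["basket"] = basket;
            }
        }
        return basket;
    }

    private static Basket LoadBasketFromDatabase(string userId)
    {
        // Placeholder for a lookup in a database or external service.
        return new Basket();
    }
}
```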