Before each question and answer, the developers of the software have a few general remarks that are important to state:
This depends on which solution the client chooses. If they go for hosting on your/our servers, they will need an account. If they go for hosting on their own servers within their firewall, we need to install the software on their computers using containers.
See above regarding hosting. Developing the first model requires some computational power (comparable to a powerful desktop computer), while running the forecasts live requires little and can easily be done on a standard computer. If hosted on your/our servers, a standard internet connection is more than enough; basically, we just need to move numbers equivalent to one A4 page every 15 minutes and one simple image whenever forecasts are requested. Chrome is the best browser for our solution.
It will run on one computer or a server but can be accessed from an unlimited number of connections. If you update or change computers, the software will have to be moved, but that is not an issue.
There are currently no plans for a dedicated app, but the graphs will be scalable and can therefore be viewed on any device.
To function optimally, the system only requires data from the customer on historic arrivals, preferably updated every hour. If data sources other than our standard sources are to be integrated, the system will require access to these, either locally or over the internet. The system can handle CSV, Excel and JSON, but it is preferable that the data follow a prespecified structure; we will provide a description of the format.
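As an illustration only (the field names and layout here are placeholders, not our actual format, which will be described separately), an hourly arrivals record delivered as JSON could look like this:

    [
      { "department": "ED", "timestamp": "2024-03-01T08:00:00+01:00", "arrivals": 7 },
      { "department": "ED", "timestamp": "2024-03-01T09:00:00+01:00", "arrivals": 11 }
    ]

The same information can equally well be delivered as columns in a CSV or Excel file.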
This is difficult to answer, as each additional data source requires specific analyses. As stated above, we only require historic and live data from the customer. That said, additional data sources can sometimes provide information that is not captured by the standard data sources, and if the customer has access to such sources, we can integrate them into the forecasts using the same formats as the influx data.
This is being implemented at the time of writing. For now, it is a username/password approach with e-mail validation. The password is stored encrypted, and information is transported over HTTPS using TLS 1.3 encryption. The recovery process has yet to be implemented.
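As a minimal sketch only (the certificate paths and port are placeholders, and the actual front-end setup may differ), a NodeJS HTTPS endpoint can be restricted to TLS 1.3 as follows:

    // Minimal HTTPS server that only accepts TLS 1.3 connections.
    // Certificate/key paths and the port are placeholders.
    const https = require('https');
    const fs = require('fs');

    const options = {
      key: fs.readFileSync('/etc/ssl/private/server.key'),
      cert: fs.readFileSync('/etc/ssl/certs/server.crt'),
      minVersion: 'TLSv1.3', // reject connections using older TLS versions
    };

    https.createServer(options, (req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('ok\n');
    }).listen(443);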
In a department with approximately 95 arrivals per day, we have, over two years, had an accuracy of +/- 1 patient per hour 95% of the time (measured over 8 hours). The system actively monitors accuracy and will trigger a recalibration if the accuracy falls below a level decided in consultation with the customer.
The system can be hosted at the customer's site, on your servers, on ours (in Denmark) or in a data center in Germany. The amount of data transferred between the customer and the servers is so small that it really does not matter where the servers are located.
While the system can in principle be set up with many different user levels, only very few make sense. We suggest that the customer apply three user levels: (1) a primary (admin) user, who is financially responsible and responsible for the customer's remaining users; (2) a super user, who can select and set up forecasts; and (3) a guest user (or viewer), who can see the customer's forecasts.
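Purely as an illustration (the role names and permission flags below are placeholders, not a fixed schema), such a three-level setup could be represented as:

    {
      "roles": {
        "admin":  { "manage_users": true,  "setup_forecasts": true,  "view_forecasts": true },
        "super":  { "manage_users": false, "setup_forecasts": true,  "view_forecasts": true },
        "viewer": { "manage_users": false, "setup_forecasts": false, "view_forecasts": true }
      }
    }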
This depends on whether it is set up on our hardware or the customer's hardware. In the latter case, it is the customer's firewall that provides security. If it is set up on our hardware, we apply a zero-trust approach (see https://www.defined.net/) for internal communication, and only the front end is accessible from external systems; everything else is blocked by the firewall. In addition, the system is set up in containers, where each container can only be accessed by external users through specific ports (e.g. port 443 for HTTPS). The containers are set up without root (administrative) access so that privileges are minimized.
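As an illustration only (the image name, user ID and internal port are placeholders, not our actual configuration), a container can be started without root privileges and with only one port published, for example:

    # Run a container as an unprivileged user (UID/GID 1000) and publish
    # only port 443; all other container ports remain unreachable externally.
    docker run --detach \
      --user 1000:1000 \
      --publish 443:8443 \
      example/frontend:latest

The same command works with podman in place of docker.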
We prefer all maintenance to be performed by us/you to ensure optimal quality. We offer maintenance within normal business hours and additional support can be purchased for a fee.
If run on our servers, this is done by us. If run on servers at the customer, it is up to them. We have no specific requirements for the components, as we can run the system on any standard Linux installation (our preferred platform).
Preferably Linux (Debian-based, such as Ubuntu Server 22.04 LTS or later), but we also support Windows for running the full system. To view the predictions from the forecast models, only access to a browser is required.
The backend system is developed in C++, while the frontend components are developed in JavaScript using NodeJS (see https://nodejs.org/) as the execution engine. The libTorch library (the C++ variant of the PyTorch framework, see https://pytorch.org/) is used in the implementation of the machine learning models.
The system is a collection of microservices that can be executed in a containerized environment, such as Docker (https://www.docker.com/) or Podman (https://podman.io/). As such, each microservice can be replaced or updated without affecting the other microservices, as long as the API remains unchanged. This allows for continual improvement of the overall system. In addition, the system can be extended with new components.
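As a simplified sketch only (service names, images and ports are placeholders, not our actual deployment), such a containerized setup can be described in a Compose file, where each service can be rebuilt and redeployed independently:

    # Illustrative Compose file: each microservice runs in its own container
    # and can be updated on its own, as long as its API stays unchanged.
    services:
      frontend:
        image: example/frontend:1.4   # NodeJS web front end
        ports:
          - "443:8443"                # only HTTPS is exposed externally
      forecast:
        image: example/forecast:2.1   # C++/libTorch forecasting service
        expose:
          - "8080"                    # reachable only by the other containers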
These are extracted from national agencies or their equivalents; we only use data from national or international agencies or equivalent bodies.
PraeSight (our short-term forecast) currently forecasts 12 hours ahead, but we plan to extend this. PraePlan (our long-term forecast) will forecast attendance months into the future. While PraeSight is very accurate (see above), the accuracy of PraePlan depends on how much historic data the customer can provide; experience tells us that we need at least 3-4 years of data, and preferably more.
To date, we get it right 95% of the time, but of course we cannot guarantee that this will always be the case. We always provide confidence intervals for our forecasts so the customer can make their own assessment of their reliability. If the confidence intervals are very wide, the forecasts should not be trusted too much.