F.A.Q.
Frequently Asked Questions

Questions we get asked when introducing the software to potential clients and partners:

Before the questions and answers, the developers of the software have a few general remarks that are important to state:

  1. We can provide one of two variants: (1) hosted on your/our hardware, or (2) hosted locally on the customer’s hardware within their firewall. We prefer option (1), as it is significantly easier to manage.
  2. The data we need is primarily (1) influx data for what we are to forecast (e.g. number of patients per hour), and (2) events that may affect the influx at the customer, such as changes in the catchment area, organizational changes at the customer, or local events such as festivals (the latter is not required up front).
  3. Note that none of the data we require for the forecast models is person-specific: we only require statistical data on the influx, and therefore we do not need individual patient health data from the electronic patient record systems.

Will I need to create an account, or will it be provided to me?

If an account needs to be created, what is the process? Will I receive a pre-generated account with login credentials, or will I be guided to set it up on my own?

This depends on which solution the client chooses. If they opt for hosting on your/our servers, they will need an account. If they opt for hosting on their own servers within their firewall, we need to install the software on their machines using containers.

If installation is required, what are the system requirements? If it’s cloud-based, what kind of internet connectivity or browser support is necessary?

See the note on hosting above. Developing the first model requires some computational power (comparable to a powerful desktop computer), while running the forecasts live requires very little and can easily be done on a standard computer. If hosted on your/our servers, a standard internet connection is more than enough: we only need to transfer an amount of numbers comparable to one A4 page every 15 minutes, plus one simple image whenever forecasts are requested. Chrome is the browser we recommend for our solution.

Is there a specific number of activations allowed per license? Can the license be transferred if I switch devices?

It will run on one computer or server but can be accessed over an unlimited number of connections. If you upgrade or replace the computer, the software will have to be moved, but that is not an issue.

If it’s compatible, are any dedicated apps planned for Android and iOS? Does the mobile/tablet version have full functionality, or are some features limited?

There are currently no plans for a dedicated app, but the graphs are scalable and will therefore display on any device.

Does the system allow for importing hospital-specific data, such as patient records, staffing statistics, or historical event logs? What formats (e.g., CSV, Excel, JSON) or technical requirements must be met for seamless integration?

To function optimally, the system only requires data on historic arrivals from the customer, with updates preferably every hour. If data sources other than our standard sources are to be integrated, the system will require access to these, either locally or over the internet. The system can handle CSV and Excel as well as JSON, but the data should preferably follow a prespecified structure; we will provide a description of the format.
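
As an illustration only (the actual prespecified structure is documented separately and may differ), hourly influx data delivered as JSON could look something like this:

```typescript
// Illustration only: a possible shape for hourly influx data.
// The actual prespecified structure is documented separately and may differ.
interface InfluxRecord {
  department: string;  // e.g. "Emergency"
  hourStart: string;   // ISO 8601 hour, e.g. "2024-06-01T13:00:00+02:00"
  arrivals: number;    // number of patients arriving in that hour
}

const example: InfluxRecord[] = [
  { department: "Emergency", hourStart: "2024-06-01T13:00:00+02:00", arrivals: 7 },
  { department: "Emergency", hourStart: "2024-06-01T14:00:00+02:00", arrivals: 5 },
];
```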

Will the integration of custom statistics improve the accuracy of predictions, tailor recommendations to our needs, or provide more actionable insights? Are there specific data points that are particularly important for optimal performance?

This is difficult to answer, as each additional data source requires specific analyses. As stated above, we only require historic and live influx data from the customer. That said, additional data sources can sometimes provide information that is not captured by the standard sources, and if the customer has access to such sources, we can integrate them into the forecasts using the same formats as the influx data.

Does the system use advanced encryption, multi-factor authentication (MFA), or other protocols to protect user accounts? How are account recovery processes handled?

This is being implemented at the time of writing. For now, authentication uses a username/password approach with e-mail validation. Passwords are stored encrypted, and all information is transported over HTTPS using TLS 1.3. The account recovery process has yet to be implemented.
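
As a minimal sketch only (assuming a Node.js-based endpoint; the certificate paths and port are placeholders, not the actual deployment configuration), enforcing TLS 1.3 on an HTTPS endpoint can look like this:

```typescript
// Minimal sketch: an HTTPS server that rejects handshakes below TLS 1.3.
// Certificate paths and the port are placeholders for illustration.
import * as https from "node:https";
import { readFileSync } from "node:fs";

const server = https.createServer(
  {
    key: readFileSync("server-key.pem"),   // placeholder certificate key
    cert: readFileSync("server-cert.pem"), // placeholder certificate
    minVersion: "TLSv1.3",                 // refuse TLS 1.2 and older
  },
  (_req, res) => {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
  }
);

server.listen(443); // HTTPS only; no plain-HTTP listener
```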

What is the margin of error for the data? Are there mechanisms in place to validate or crosscheck its accuracy?

In a department with approximately 95 arrivals per day, we have, over two years, achieved an accuracy of ±1 patient per hour 95% of the time (measured over 8 hours). The system actively monitors accuracy and will trigger a recalibration if accuracy decreases beyond a level decided in consultation with the customer.
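
As an illustrative sketch only (the production backend is written in C++; the tolerance and threshold below are placeholders to be agreed with the customer), the kind of monitoring logic described above could look like this:

```typescript
// Illustrative sketch of accuracy monitoring; not the actual implementation.
interface HourlyObservation {
  forecast: number; // forecast number of arrivals in the hour
  actual: number;   // observed number of arrivals in the hour
}

// Share of hours where the forecast was within ±1 patient of the actual count.
function hitRate(observations: HourlyObservation[], tolerance = 1): number {
  const hits = observations.filter(o => Math.abs(o.forecast - o.actual) <= tolerance);
  return hits.length / observations.length;
}

// Flag the model for recalibration when the hit rate falls below the agreed level.
function needsRecalibration(observations: HourlyObservation[], agreedLevel = 0.95): boolean {
  return hitRate(observations) < agreedLevel;
}
```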

Is the server hosted locally, in a private data center, or by a cloud provider like AWS, Azure, or Google Cloud? How does the location affect latency and data privacy?

The system can be hosted at the customer’s site, on your servers, on ours (in Denmark), or in a data center in Germany. The amount of data transferred between the customer and the servers is so small that it makes little practical difference where the servers are located.

What are the available user levels (e.g., admin, standard user, guest)? Can roles be customized to fit organizational needs?

While the system can in principle be set up with many different user levels, only a few make sense in practice. We suggest that the customer apply three user levels: (1) a primary (admin) user, who holds financial responsibility and administers the customer’s remaining users, (2) a super user, who can select and set up forecasts, and (3) a guest user (or viewer), who can view the customer’s forecasts.
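
As a hypothetical illustration (the role names and permissions below are assumptions, not the system’s actual configuration), the three suggested levels map naturally onto a simple role/permission table:

```typescript
// Hypothetical role/permission mapping for the three suggested user levels.
type Role = "admin" | "superUser" | "viewer";

const permissions: Record<Role, string[]> = {
  admin:     ["manageUsers", "manageBilling", "configureForecasts", "viewForecasts"],
  superUser: ["configureForecasts", "viewForecasts"],
  viewer:    ["viewForecasts"],
};

// Example check: only admins and super users may set up forecasts.
function canConfigureForecasts(role: Role): boolean {
  return permissions[role].includes("configureForecasts");
}
```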

Are firewalls managed on-site by the user, or are they handled by the service provider? What specific firewall technologies are used?

This depends on whether the system is set up on our hardware or the customer’s. In the latter case, it is the customer’s firewall that provides the security. If it is set up on our hardware, we apply a zero-trust approach (see https://www.defined.net/) for internal communication between our machines, and only the front end is accessible from external systems; everything else is blocked by the firewall. In addition, the system is set up in containers, where each container can only be accessed through specific ports (e.g. port 443 for HTTPS) by external users. The containers are set up without root (administrative) access so that privileges are minimized.

Is maintenance handled by an in-house IT team, the vendor, or a third-party service? What is the response time for maintenance requests?

We prefer all maintenance to be performed by us/you to ensure optimal quality. We offer maintenance within normal business hours, and additional support can be purchased for a fee.

Are spare parts included in the warranty or support agreement? How quickly can they be delivered and installed?

If the system runs on our servers, this is handled by us. If it runs on servers at the customer, it is up to them. We have no specific requirements for the components, as the system can run on any standard installation, preferably Linux.

Does it support all major operating systems, such as Windows, macOS, and Linux? Are there any specific versions or updates required?

Preferably Linux (Debian-based, such as Ubuntu Server 22.04 LTS or later), but we also support Windows for running the full system. Viewing the predictions from the forecast models only requires access to a browser.

What technologies (e.g., programming languages, frameworks) form the system’s backbone? Does it have modular architecture or APIs that allow integration with new tools or advancements?

The backend system is developed in the C++ programming language, while the frontend components are developed in JavaScript using NodeJS (see https://nodejs.org/) as the execution engine. The libTorch library (the C++ variant of the PyTorch framework, see https://pytorch.org/) is used to implement the machine learning models.

The system is a collection of microservices that can be executed in a containerized environment such as Docker (https://www.docker.com/) or Podman (https://podman.io/). As such, each microservice can be replaced or updated without affecting the other microservices, as long as the API remains unchanged. This allows continual improvement of the overall system. In addition, the system can be extended with new components.
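
As a hypothetical sketch of what consuming such a microservice API might look like from the JavaScript/NodeJS front end (the endpoint path, field names, and confidence-interval fields are assumptions for illustration, not the actual API):

```typescript
// Hypothetical client sketch for a forecast microservice; not the actual API.
interface HourlyForecast {
  hour: string;      // e.g. "2024-06-01T13:00:00Z"
  expected: number;  // forecast number of arrivals
  lower95: number;   // lower bound of the 95% confidence interval
  upper95: number;   // upper bound of the 95% confidence interval
}

async function fetchForecast(baseUrl: string, hoursAhead: number): Promise<HourlyForecast[]> {
  const response = await fetch(`${baseUrl}/forecast?hours=${hoursAhead}`);
  if (!response.ok) {
    throw new Error(`Forecast service returned ${response.status}`);
  }
  return (await response.json()) as HourlyForecast[];
}

// Usage: fetchForecast("https://example-host", 12).then(console.log);
```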

What methods or algorithms are used to ensure prediction accuracy? Is the data sourced from reliable databases or live feeds?

The external data is extracted from national agencies or their equivalents; we only use data from national or international agencies or equivalent sources.

Does the system have a time limit on forecasting (e.g., days, months, years)? How does accuracy change over longer prediction periods?

PraeSight (our short-term forecast) currently forecasts 12 hours ahead, but we plan to extend this horizon. PraePlan (our long-term forecast) forecasts attendance months into the future. While PraeSight is very accurate (see above), the accuracy of PraePlan depends on how much historical data the customer can provide; experience tells us that we need at least 3-4 years of data, and preferably more.

What is the margin of error or uncertainty level in the predictions? Are there mechanisms to highlight or address potential inaccuracies?

To date, we have been right 95% of the time but, of course, cannot guarantee that this will always be the case. We always provide confidence intervals with our forecasts, so the customer can make their own assessment of their reliability. If the confidence intervals are very wide, the forecasts should not be relied upon too heavily.