The GovStack initiative aims to build a common understanding and technical practice on fundamental reusable and interoperable digital components, which we collectively refer to as Building Blocks. Our effort is expert-driven and community-based, and includes the participation of multiple stakeholders to bring together expertise for strengthening a government's cross-agency architecture view.
Our focus is to enable countries to kickstart their digital transformation journey by adopting, deploying, and scaling digital government services. Through the digital "building blocks" approach, governments can easily create or modify their digital platforms, services, and applications while also reducing cost, time, and resource requirements.
Please note we have a code of conduct that applies to all interactions with the project.
The GovStack community understands that a wide variety of people from different backgrounds have valuable insight into the work GovStack is doing. Therefore, we provide multiple ways for people from those backgrounds to interact with us and share their insight. Here are three:
To make giving feedback on the specifications as accessible as possible to the widest audience, we have created an online form that will take your feedback and direct it automatically to the right people within the project.
1. In the navigation menu above, click the Give Feedback link.
2. Enter your contact details so that people in the project can follow up with further questions on your feedback.
3. Note that the page you were browsing has already been entered into the form on your behalf. If your feedback relates to a different page, please edit the link as needed.
4. Give your feedback, remembering to describe the problem as you see it, offer one or more alternative suggestions if you have them, and explain why the change is required.
5. On submission, the form will reply with a link to a Jira issue we have automatically created on your behalf. This is where the GovStack project team will work on the feedback. You are welcome and encouraged to register and participate in the discussion in the comments.
Some members of the community will have experience using git to interact directly with specification content; you may do so using the following process.
Note that the content in the GovStack specifications follows the Markdown standards applied by the GitBook tool.
1. Check our issue queue first to see if anyone else has already reported this issue.
2. Create a new issue and tell us about the problem that you see.
3. The maintainer of this section will be alerted and will work with you to decide what changes should be made.
4. If invited to do so, create a pull request in the appropriate GovStack GitHub repository, including the issue number and a short description in the pull request title, for example: {Jira issue number} - {short description of the change}, e.g. gov-001 - Adding contribution information. Note that our maintainers are not alerted to pull requests that do not mention an associated issue number.
5. Once the issue describes the reason for a change and links to the change you propose, a maintainer for that building block is alerted.
6. The maintainer will work with you to ensure that the change meets our standards for inclusion.
We value all contributions to the project and, even if the change is not accepted, we always strive to give feedback that helps you understand the decisions taken.
The following diagram provides an example of what a GovStack implementation may look like in practice. Several concepts that are important to GovStack are shown in this diagram:
A GovStack implementation may consist of multiple 'applications', each serving a distinct purpose. The value that GovStack provides is that these applications do not have to be developed from scratch; rather, they leverage core functionalities provided by various Building Blocks, and these Building Blocks may be used by multiple applications.
Applications may access outside services through the Information Mediator. Access to these services is configured within the Information Mediator per organization. One application may have permission to access data provided by a particular ministry that another application may not access.
A GovStack application can be used by different types of users. The roles and permissions for various user groups must be managed by the application itself.
Building Blocks may be based on existing applications or Digital Public Goods (DPGs). These DPGs may have an API that conforms with the GovStack API specification for that Building Block. If not, an Adapter can be used to map the existing API to the GovStack API.
The Application frontend and backend may use any mechanism to communicate (REST, GraphQL, etc.). However, all GovStack API calls should be made using standard REST protocols.
In a GovStack implementation, there are several different types of components. In addition, there are components that must be developed to support the testing and compliance process for a particular building block. This document provides a definition of these various components as well as detailing which components are generic (may be used for multiple use cases) and which are specific to a particular use case. This document will also flag any components that are specific to the GovStack sandbox/demonstration platform.
Application Frontend. For a particular use case, a user interface will be implemented to provide necessary information to the user and collect any needed information from the user.
Application Backend. For a use case, the application backend will manage the flow and business logic needed. The backend will access any local data and make calls to GovStack Building Blocks as needed.
Repositories. An application may have a local repository that contains data that is specific for that use case.
BB Emulator. This component is applicable only to the GovStack sandbox/demonstration platform. In some cases, a simple/lightweight implementation of a Building Block has been created to provide the needed BB functionality for a particular use case. This reduces the infrastructure load for the sandbox.
These components may be adapted to a country specific context, but can be generic across multiple use cases in a particular GovStack implementation.
BB candidate software. This is a specific software platform that functions as one or more building blocks in a GovStack implementation.
Adapter. The adapter provides a mapping from an existing software platform's API to the format specified by the GovStack spec for a particular Building Block.
BB Configuration files. These are the Docker files and startup scripts that allow a product to be automatically launched and configured in a GovStack environment.
Test Harness Scripts. These scripts configure any data or environment that is needed for a candidate application to be able to pass the tests for a particular Building Block in the testing application
Repositories. As noted, some repositories may contain use-case specific data. However, there may also be repositories that are needed by multiple applications or use cases.
While the following principles are relevant to many technology deployments, when leveraging the GovStack approach it is important to keep these principles in mind during all phases of design, development, and deployment.
Design of systems should be rooted in the needs of the citizens/users of these platforms. A Citizen-centric technology will include the following attributes:
User-centered design
Right to be forgotten: everything must be deletable
The best tools evolve from empathizing, understanding and designing for the needs of end-users. Accordingly, we’ve identified a series of use cases and user journeys here:
Each use case is composed of a collection of modules, or building blocks. As you can see, a relatively small set of these building blocks can be readily applied to a wide variety of applications in low-resource settings.
Where possible, GovStack advocates for the use of open technology, which can reduce cost and help avoid vendor lock-in. Open technology can be defined as:
Based on open standards
Based on the Principles for Digital Development
Built on open-source software where possible
Supports open development
Cloud native where possible (Docker/Docker Compose/OCI containers)
Any Building Blocks should be developed in a manner which is sustainable and ensures that the technology will continue to be updated and maintained. Some core considerations for sustainability are:
Continuous funding for maintenance, development and evolution, which results in lower long-term costs
Uses a microservices-based architecture rather than a monolithic one, which increases interoperability, development and deployment speed, and reliability.
Building Blocks are audited and certified before being made available
Development processes and standards enforce quality and security
Different certification levels reflect level of standards-compliance
Regular security scanning and auditing
Public ratings and reviews
Comprehensive logging and exception handling
It is vitally important that technology solutions be usable by all. Some characteristics of accessible design include:
Meets users where they are: web, mobile, SMS and/or voice. UI supports accessibility technologies, e.g. screen readers.
SSO allows for signing in once for multiple services
Deployment and development processes and standards are open to contributors
Community-driven development tools for documentation and support
Blueprints, templates and documentation
GovStack is rooted in the concept that Building Blocks should be re-usable and configurable, such that they can support multiple use cases with minimal effort:
Building Blocks can be reused in multiple contexts
Each Building Block is autonomous
Building Blocks are interoperable, adhering to shared standards
Building Blocks should be easy to set up
Standardized configuration and communications protocols should be used to connect Building Blocks
Building Blocks can be provided as a service (ICT opportunity)
Deployments of Building Blocks should follow these principles:
Any client-facing functionality should operate in low-resource environments:
Occasional power
Low bandwidth
Low-reliability connectivity
Easily scalable for high availability and reliability
API-only based decoupling
Asynchronous communications pattern decoupled through rooms is ideal
Eventual consistency for data
As with any software implementation, there are constraints and limitations in the GovStack approach that must be addressed. In any country context, there will be deficiencies that present challenges to any technology implementation. In the context of GovStack, the constraints and deficiencies that may be present must be considered at the outset of any project.
The following list of potential deficiencies that may be encountered with high-level descriptions should be kept in mind during the development and deployment of any Building Block. Each building block specification SHOULD specify mitigations for these issues:
Poor or non-existent national ICT governance structure for making decisions and ensuring accountability in the use of ICT in the country. Such a structure may be described in documents, but its implementation may be suboptimal or unenforced.
No strategic policy framework for the acquisition and use of IT for social and economic growth in the country. The policy might still be at the development stage, and where a policy exists, its implementation may be lagging or non-existent.
The development of IT infrastructure in the country lags behind or is suboptimal because of poor policies and insufficient investment in the ICT sector. Coverage of the national power grid may be low, with little penetration of alternative energy sources, especially in rural areas.
Limited funding for ICT projects and initiatives. ICT interventions may not be prioritized, and there may be no institutionalized or routine government support for ICT projects.
ICT projects and interventions are implemented in silos using non-standard approaches, and most ICT interventions are proprietary, high-cost ventures from private institutions. There may be no national standard architecture for interoperability and integration of systems.
Low ICT literacy levels among users; little or no research and development by national institutions and academia on the use and scale-up of ICT in the country; and very few ICT professionals to support large-scale ICT projects at the national level.
Little or no network coverage by GSM and/or broadband technologies. Cellular and internet subscriptions per capita are very low, fibre connectivity is limited, and a large percentage of the population does not have computers, laptops, or smartphones.
Household internet connectivity is concentrated in urban areas rather than rural areas.
New technologies, which are not always market-ready, are often more expensive than incumbent technologies and lack the necessary supportive infrastructure. They also face competition from existing technologies, including unsustainable ones.
New technologies require specialized knowledge and skills, which are often lacking in host countries where education levels in science, engineering, and technology can be low and the number of ICT specialists is small.
New technologies may be treated with suspicion in local communities, especially where there is prior experience of job losses or unintended social consequences.
New technologies may be seen as a challenge to cultural traditions and communal activities. Technology can also face barriers such as language, the role of women in society, a lack of entrepreneurs, or dependencies created by decades of development aid.
This document is intended to provide guidance for building block working groups and developers of products that will be integrated into a GovStack implementation. It also provides guidelines for implementers and system integrators who are deploying solutions that leverage the GovStack approach. It provides guidelines and principles that should be considered by all building blocks and cross-cutting requirements that must be considered for any GovStack project.
This will accelerate the collaborative development of best-of-breed digital public goods, enhancing efficiency and transparency across the world - especially in low-resource settings.
GovStack aims to provide a reference architecture for digital governance software to support sustainable development goals. Rooted in a "Whole-of-Government" approach, the GovStack Framework provides a methodology for leveraging common technology components and infrastructure to more easily create and deploy interoperable digital platforms which can address high-priority use cases across multiple sectors. The guidelines and requirements described in this document provide a framework for the development of digital building blocks oriented toward this goal.
The following provide criteria and definitions for Building Blocks, developed by organizations whose work is focused on achievement of the Sustainable Development Goals (SDGs). The criteria are drawn from work developed by the International Telecommunication Union (ITU) and the Digital Impact Alliance (DIAL), as well as by the Digital Public Goods Alliance (DPGA):
Refers to software code, platforms, and applications that are interoperable, provide a basic digital service at scale, and can be reused for multiple use cases and contexts.
Serves as a component of a larger system or stack.
Can be used to facilitate the delivery of digital public services via functions, which may include registration, scheduling, ID authentication, payment, data administration, and messaging.
Building blocks can be combined and adapted to be included as part of a stack of technologies to form a country’s Digital Public Infrastructure (DPI).
Building blocks may be open source or proprietary and therefore are not always DPGs.
"Building blocks can be as simple as a common set of rules or protocols (for example, email programs like the Simple Mail Transfer Protocol, SMTP), or complex (for example, an open-source health information system like the DPG District Health Information Software, DHIS2)."
Characteristics of building blocks:
Autonomous: building blocks provide a standalone, reusable service or set of services, they may be composed of many modules/microservices.
Generic: building blocks are flexible across use cases and sectors.
Interoperable: building blocks must be able to combine, connect, and interact with other building blocks.
Iterative evolvability: building blocks can be improved even while being used as part of solutions.
Per the DPGA definition, to be considered a building block, solutions must meet the following technical requirements, as determined by the GovStack Initiative:
An open API, described using the OpenAPI Specification and exposed via REST
Packaged in a container
Includes an Information Mediator where communication flows between services that are not co-located
Building blocks are software modules that can be deployed and combined in a standardized manner. Each building block is capable of working independently, but they can be combined to do much more:
Building blocks are composable, interoperable software modules that can be used across a variety of use cases. They are standards-based, open source and designed for scale.
Each Building Block represents, as much as possible, the minimum required functionality (MVP) to do its job. This ensures each Building Block is usable and useful on its own, and easily extensible to support a variety of use cases.
A Building Block is composed of domain-driven microservices, modeled as closely as possible on existing roles and processes. This helps ensure each building block is as useful as possible in the real world.
Building Blocks exchange data using lightweight, human-readable formats that can easily be extended where needed. Data models and APIs are likewise described in a lightweight, human-readable manner, allowing them to be quickly understood and validated.
A building block is only so useful on its own. In practice, building blocks MUST be connected together in a secure, robust, trusted manner that facilitates distributed deployments and communications with existing services.
Each building block deployment SHOULD use an Information Mediator to federate and communicate with other data consumers and providers, particularly when communicating between services that are not co-located. This ensures the confidentiality, integrity, and interoperability between data exchange parties. An Information Mediator MUST provide the following capabilities:
address management
message routing
access rights management
organization-level authentication
machine-level authentication
transport-level encryption
time-stamping
digital signature of messages
logging
error handling
monitoring and alerting
service registry and discovery
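As an illustration only, a few of these capabilities (time-stamping, digital signature of messages, and verification before processing) can be sketched as a signed message envelope. This is not the Information Mediator's actual protocol: the function names are hypothetical, and a real deployment uses PKI certificates and trust services rather than a shared HMAC secret.

```python
# Illustrative sketch only: a message envelope carrying a timestamp and a
# signature so the receiver can verify integrity. Real Information Mediator
# deployments use certificate authorities and PKI, not a shared secret.
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"demo-secret"  # placeholder for real key material

def wrap_message(sender, receiver, payload, now=None):
    """Build a time-stamped envelope and sign its canonical JSON form."""
    envelope = {
        "sender": sender,
        "receiver": receiver,
        "timestamp": now if now is not None else int(time.time()),
        "payload": payload,
    }
    body = json.dumps(envelope, sort_keys=True).encode()
    envelope["signature"] = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return envelope

def verify_message(envelope):
    """Recompute the signature over everything except the signature itself."""
    claimed = envelope.get("signature")
    body = {k: v for k, v in envelope.items() if k != "signature"}
    expected = hmac.new(SHARED_SECRET, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed or "", expected)
```

Any tampering with the payload after signing causes verification to fail, which is the property the Information Mediator's integrity guarantees rely on.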
In order to effectively deploy a software solution using the Information Mediator, several policies and processes will need to be applied. This section briefly describes the organizational processes that must be in place.
First, a central operator will be identified and created. This organization will be responsible for the overall operation of the system, including operations and onboarding new members. Policies and contractual agreements for onboarding need to be created.
Next, trust services need to be set up internally or procured from third parties, including timestamp and certificate authorities. This provides the necessary infrastructure to support distributed deployments.
Finally, members can be onboarded and provided with access to the Information Mediator and methods to register the services that they provide as well as discover services that are available.
Once agreements are in place, members can deploy new services in a decentralized, distributed manner. Before a new service can publish or consume data, the central operator must be notified of any changes to access rights, including organization- and machine-level authentication.
This section provides an overview of the technical processes and architecture that must be implemented once the organizational model has been created.
A Central Operator is responsible for maintaining a registry of members, the security policies for building blocks and other member instances, a list of trusted certification authorities and a list of trusted time-stamping authorities. The member registry and security policies MUST be exposed to the Information Mediator.
Certificate authorities are responsible for issuing and revoking certificates used for securing and ensuring the integrity of federated information systems. Certificate authorities MUST support the Online Certificate Status Protocol (OCSP) so that an Information Mediator can check certificate validity.
Time-stamping authorities securely facilitate time-stamping of messages. They MUST support batched time-stamping.
The Service Registry provides a mechanism for building blocks to register the services that they provide and for other building blocks to discover and consume those services. Any services provided or consumed by a Building Block that leverages the Information Mediator architecture MUST use this service registry functionality.
The following provides definitions for terms that are used by various building blocks.
Registration: Any approval/license/certificate issued by a public entity as a result of a request/declaration made by a user of the public service. The result of a “registration” is usually a number and/or a document (called certificate, license, permit, authorization, registration, clearance, approval, etc.)
Authentication: This is the technical process of establishing that the credentials (i.e. username, password, biometric etc.) provided by a party (user, system, other) is valid and that the party can be granted basic access to system resources with default access rights. Note that authorization also needs to be applied for a party to access protected resources.
Authorization: This is the technical process of establishing whether or not an authenticated party has rights to access a given protected resource. Access rights can typically be granted or revoked administratively on a read-only and/or read-write and/or execute basis through an administrative provisioning process. Permissions or rights defined for a party typically manifest in an access token that is granted at the time of authentication for the party. Hence the processes of authentication and authorization are intrinsically related.
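The relationship between these two processes can be sketched as follows. This is a minimal, hypothetical in-memory model (not a GovStack API): authentication verifies credentials and issues a token carrying the party's access rights, and authorization checks that token against a protected resource.

```python
# Minimal sketch (hypothetical, not a GovStack API): authentication issues
# a token that carries the party's rights; authorization checks that token.
import secrets

# Hypothetical in-memory credential and permission stores.
USERS = {"alice": "s3cret"}
PERMISSIONS = {"alice": {"registry:read"}}
TOKENS = {}

def authenticate(username, password):
    """Verify credentials and issue an access token with default rights."""
    if USERS.get(username) != password:
        return None
    token = secrets.token_hex(16)
    TOKENS[token] = PERMISSIONS.get(username, set())
    return token

def authorize(token, permission):
    """Check whether the authenticated party may access a protected resource."""
    return permission in TOKENS.get(token, set())
```

Note how the two are intrinsically related, as the definition above says: the access rights consulted by `authorize` are attached to the token at authentication time.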
(Workflow) Activity - a single step in a workflow process.
(Workflow) Process - a workflow process contains one or many activities.
(Workflow) Instance - an instance of execution for a workflow process.
Stewardship is critical, see
From Wikipedia: a variant of the service-oriented architecture (SOA) structural style that arranges an application as a collection of services. In a microservices architecture, services are fine-grained and the protocols are lightweight.
With any technology deployment, security is paramount. Detailed security requirements are defined in the . Beyond those standards, Building Blocks should have the following attributes:
Additionally, the Principles for Digital Development are especially relevant when designing for low resource setting. Refer to for information on these Principles.
It is STRONGLY RECOMMENDED that a building block uses an information mediator (as described below and in the ) for any communications across the internet. An Information Mediator is not required for communication between building blocks which are co-located. In this case, communication may occur using standard API calls.
Refer to the full description of the for more information.
Within this document, the key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in when, and only when, they appear in all capitals, as shown here.
Workflow Terminology: See more comprehensive descriptions of the workflow terminology in the .
Developed by Max Carlson, Kristo Vaher, Steve Conrad, Dr. P. S. Ramkumar, Wes Brown, Aare Laponin, Uwe Washer, and Trevor Kensey
Building blocks are responsible for meeting all cross-cutting requirements or specifying why specific requirements do not apply. GovStack compliance and certification processes will validate these requirements.
See: TM Forum REST API Design Guidelines
Some key principles from these design guidelines are as follows:
APIs MUST NOT include Personally Identifiable Information (PII) or session keys in URLs; use POST or other methods for this
MUST support caching/retries
Resource identification in requests: Individual resources are identified in requests, for example using URIs in RESTful Web services. The resources themselves are conceptually separate from the representations that are returned to the client. For example, the server could send data from its database as HTML, XML, or JSON, none of which are the server's internal representation.
Resource manipulation through representations: When a client holds a representation of a resource, including any metadata attached, it has enough information to modify or delete the resource's state.
Self-descriptive messages: Each message includes enough information to describe how to process it. For example, which parser to invoke can be specified by a media type.
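For instance, the no-PII-in-URLs principle can be honored by carrying identifiers in a POST body rather than in the path or query string. The sketch below is purely illustrative; the endpoint and field names are hypothetical, not part of any GovStack specification.

```python
# Sketch: keep PII out of URLs. The identifier travels in the POST body,
# so it is not exposed in server logs, proxies, or browser history.
import json

def build_lookup_request(national_id):
    """Return a (method, path, body) triple with the PII in the body only."""
    return ("POST", "/persons/lookup", json.dumps({"nationalId": national_id}))

method, path, body = build_lookup_request("1234567890")
assert "1234567890" not in path  # the identifier never appears in the URL
```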
Paraphrased from the Amazon API Mandate: https://api-university.com/blog/the-api-mandate/
All BBs must expose their data and functionality through service interfaces (APIs).
Building Blocks communicate with each other through these interfaces.
There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
It doesn't matter what technology is used: HTTP, CORBA, pub/sub, or custom protocols.
All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
Building blocks MUST NOT use shared databases, file systems or memory for data exchange with other building blocks.
Use semantic versioning when documenting changes to API definitions. Any breaking change to an API MUST use a different endpoint, for example /api/v1 and /api/v2.
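One way to picture this: a breaking change to a response shape is published under a new version prefix, while the old endpoint keeps serving existing clients unchanged. The handlers below are hypothetical illustrations, not GovStack APIs.

```python
# Sketch: breaking changes live under a new endpoint prefix so existing
# clients keep working. Handler and route names are hypothetical.
def get_person_v1(person_id):
    """Original response shape: name is a flat string."""
    return {"id": person_id, "name": "Ada Lovelace"}

def get_person_v2(person_id):
    """Breaking change: name becomes a structured object."""
    return {"id": person_id, "name": {"given": "Ada", "family": "Lovelace"}}

ROUTES = {
    "/api/v1/persons": get_person_v1,  # old clients stay on v1
    "/api/v2/persons": get_person_v2,  # new clients opt into v2
}

def dispatch(path, person_id):
    """Route a request to the handler for the requested API version."""
    return ROUTES[path](person_id)
```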
Documentation on the installation and use of the Building Block MUST be provided. Where possible, this documentation SHOULD be stored alongside code in a repository. Documentation MAY be generated from code where applicable.
Each building block’s service APIs MUST be defined and exposed using a standardized machine-readable language. External APIs are described using the OpenAPI 3.x specification. See the following resources for additional information:
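As a rough illustration, a minimal OpenAPI 3.x description for a single hypothetical endpoint might look like the following, expressed here as a Python dict for convenience; in practice it would live in a YAML or JSON file alongside the building block.

```python
# Sketch of a minimal OpenAPI 3.0 description for one hypothetical endpoint.
# The path and schema are illustrative, not a GovStack data model.
import json

OPENAPI_SPEC = {
    "openapi": "3.0.3",
    "info": {"title": "Example Building Block API", "version": "1.0.0"},
    "paths": {
        "/persons/{id}": {
            "get": {
                "summary": "Fetch a person record",
                "parameters": [{
                    "name": "id", "in": "path", "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "The person record"}},
            }
        }
    },
}

# The machine-readable form is what documentation and testing tools consume.
machine_readable = json.dumps(OPENAPI_SPEC)
```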
Each building block MUST be ready to be deployed as independent container images. Source code and build instructions SHOULD be committed to a public code repository where possible.
A building block may be composed using Kubernetes or Docker Compose. All build files must be included alongside the source code.
When a building block requires deployment tools such as Kubernetes or Ansible, configuration and deployment scripts should be included in the building block repository. Use of this type of deployment configuration will make individual components of the building block independently scalable and make building blocks less monolithic and more efficient.
Building Blocks MUST conform to GDPR principles, including the right to be forgotten (account deletion) and privacy requirements, to protect the rights of individuals. Note that these requirements may vary by region, and building blocks must conform to regulatory requirements wherever they are deployed.
Building Blocks MUST have a mechanism for generating logging information. This may be as simple as using STDOUT and capturing through docker logs, or may use other log sinking technologies.
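A minimal sketch of the STDOUT approach, using Python's standard logging module: each event is emitted as one JSON object per line, which `docker logs` or any line-oriented log sink can capture without extra configuration.

```python
# Sketch: structured logging to STDOUT so `docker logs` (or any other
# log sink) can capture it. Logger name and event fields are hypothetical.
import json
import logging
import sys

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(message)s"))
logger = logging.getLogger("building-block")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(event, **fields):
    """Emit one JSON object per line; line-oriented sinks parse this easily."""
    line = json.dumps({"event": event, **fields})
    logger.info(line)
    return line

log_event("request.received", path="/api/v1/persons", status=200)
```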
When Building Blocks require callback functionality, they must use webhooks and not direct links to functions within the building block.
All Building Blocks MUST support secure HTTPS transport with TLS 1.3 and insecure ciphers disabled.
GET and PUT APIs (as well as HEAD, OPTIONS, and TRACE) MUST be idempotent: making the same call multiple times leaves the resource in the same state. DELETE is also idempotent (repeating it leaves the resource deleted), while POST is not, as each call creates new underlying data. Reference https://restfulapi.net/idempotent-rest-apis/ for more information.
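The distinction can be sketched against a hypothetical in-memory store: repeating the same PUT leaves the resource in the same state, while each POST creates a new record.

```python
# Sketch: idempotent PUT vs non-idempotent POST against an in-memory
# store. Store layout and function names are hypothetical.
STORE = {}
_next_id = 0

def put_person(person_id, record):
    """Idempotent: the final state is the same however many times it runs."""
    STORE[person_id] = record
    return STORE[person_id]

def post_person(record):
    """Not idempotent: each call creates a new resource with a new id."""
    global _next_id
    _next_id += 1
    STORE[_next_id] = record
    return _next_id
```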
API calls SHOULD be able to be made independently of one another. Each API call should contain all of the data necessary to complete itself successfully.
Transactions that cross multiple services SHOULD provide a correlation ID that is passed with every request and response. This allows for easier tracing and tracking of a specific transaction.
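A sketch of correlation-ID propagation using a header-style dict follows; the `X-Correlation-ID` header name is a common convention, not something mandated by GovStack.

```python
# Sketch: reuse the caller's correlation ID (or mint one at the edge) and
# carry it on every downstream request and response for end-to-end tracing.
import uuid

CORRELATION_HEADER = "X-Correlation-ID"

def with_correlation(headers):
    """Reuse the caller's correlation ID, or start a new one at the edge."""
    headers = dict(headers)
    headers.setdefault(CORRELATION_HEADER, str(uuid.uuid4()))
    return headers

def call_downstream(headers):
    """A hypothetical downstream call that echoes the same correlation ID."""
    return {"status": 200, CORRELATION_HEADER: headers[CORRELATION_HEADER]}
```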
Some building blocks may require the use of security keys. Those that do MUST have clearly defined key-rotation policies to enhance security.
Database processing tools like triggers and stored procedures should be avoided.
Each building block MUST be capable of running independently, without requiring other dependencies such as external data stores or other building blocks.
Configuration MUST be done using secure processes, such as environment variables or a secure secret store.
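A minimal sketch of environment-based configuration, with hypothetical variable names: required secrets fail fast when missing, and optional settings fall back to safe defaults.

```python
# Sketch: read configuration from environment variables with explicit
# failure when a required value is missing. Variable names are hypothetical.
import os

def load_config(env=os.environ):
    """Build a config dict from the environment, failing fast if incomplete."""
    missing = [k for k in ("DB_HOST", "DB_PASSWORD") if k not in env]
    if missing:
        raise RuntimeError(f"missing required configuration: {missing}")
    return {
        "db_host": env["DB_HOST"],
        "db_password": env["DB_PASSWORD"],          # never hard-code secrets
        "log_level": env.get("LOG_LEVEL", "INFO"),  # optional, with default
    }
```

In a real deployment the secret values would come from a secure secret store that injects them into the environment at startup.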
Designs should support occasional connectivity/low bandwidth, and should allow for asynchronous communication between building blocks. A Publish/Subscribe design pattern can be used to handle changes, allowing loosely-coupled solutions to be assembled without changing existing APIs.
JSON SHOULD be used for data models/services wherever possible. See https://www.json.org/json-en.html. Where JSON exchange is not possible, building blocks must use other standard data formats (such as XML).
If an existing standard is available, it should be used, e.g. DICOM/HL7/FHIR for healthcare. TM Forum has a large library of standardized APIs and data models that can be used.
Building blocks and building block solutions MUST leverage existing standards, especially those listed in the Standards section below.
Building blocks SHOULD validate all incoming data to ensure that it conforms with the expected format and type. APIs should also sanitize incoming data, removing any unsafe characters or tokens.
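A sketch of such validation against a hypothetical two-field schema: unknown fields, wrong types, and unsafe characters are all rejected before the payload reaches business logic.

```python
# Sketch: validate type and format of incoming data and reject unexpected
# fields. The schema and field names are hypothetical examples.
import re

ALLOWED_FIELDS = {"name": str, "age": int}
NAME_RE = re.compile(r"^[A-Za-z .'-]{1,100}$")  # conservative whitelist

def validate_person(payload):
    """Return (ok, errors) after checking types, format, and unknown keys."""
    errors = []
    for key, value in payload.items():
        if key not in ALLOWED_FIELDS:
            errors.append(f"unexpected field: {key}")
        elif not isinstance(value, ALLOWED_FIELDS[key]):
            errors.append(f"wrong type for {key}")
    if isinstance(payload.get("name"), str) and not NAME_RE.match(payload["name"]):
        errors.append("name contains unsafe characters")
    return (not errors, errors)
```

Whitelisting known-good characters, as above, is generally safer than trying to blacklist known-bad ones.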
A building block MAY provide a mock testing implementation of API functionality to show example endpoints and data payloads. See https://github.com/GovStackWorkingGroup/bb-template/tree/main/examples for additional information.
Where a building block has a human user interaction, it SHOULD be able to present information to the user in their local language. Building blocks should be designed to support multiple locales.
Where precise timestamps are required, building blocks SHOULD leverage Network Time Protocol (NTP) to synchronize timestamps between servers.
Software development best practices are recommended for all building blocks. The following guidelines should be followed as part of the software development process.
No languages, frameworks, or dependencies should be used in a building block where that component has an EOL of less than 5 years.
Where possible, building blocks SHOULD be written using commonly used languages (see the TIOBE index: https://www.tiobe.com/tiobe-index/) to ensure ongoing maintenance and support are as easy as possible. Building blocks MAY leverage less common languages, such as shell scripting, where needed.
Code quality and security scanning tools should be run across the code base and dependencies, e.g. https://www.sonarqube.org/ and/or https://snyk.io/ .
Building blocks should include tests that provide both unit and integration test coverage.
See https://standard.publiccode.net/ and practices outlined here:
Developed by David Forden, Jean-Reynald Vivien-Gayout de Falco, Dr.P.S.Ramkumar and Max Carlson
GovStack Reference Architecture document
It is vital that GovStack be able to connect to existing applications. Likewise, existing applications should be able to connect to and utilize GovStack resources as they see fit. This must be done without compromising the easy and secure interoperability provided by the GovStack system.
GovStack has defined a process for an existing product to become compliant with GovStack. This process is outlined in this document. Note that there are various levels of compliance that are defined. Any product owners or maintainers that are interested in the compliance process can use the GovStack testing platform to begin: https://testing.govstack.global
This section contains resources that can be used by existing platforms to integrate into the GovStack ecosystem. The first section describes 'Adaptors', which can be used to translate an existing API into a GovStack-conformant API. The second section describes native GovStack implementation. Finally, we describe the GovStack Testing Harness, which allows products to run automated tests that have been developed by the GovStack team to determine how closely an application conforms to the specifications and expected behaviors described by the Building Block specifications.
Adaptors are used to map existing APIs and functionality in a Digital Public Good into a format and scheme that is compatible with the GovStack API specifications.
Adaptors may transform data formats (e.g. XML to JSON), may transform URLs/protocols, or may be used to map GovStack APIs and data structures into sector-specific standards (e.g. FHIR patient records).
Adaptor: An adaptor provides both URL and payload mapping between an existing API developed by a DPG and the GovStack API definitions for the Building Block that the DPG provides functionality for.
Workflow: The workflow Building Block is used to manage more complex transactions where multiple Building Blocks must be called to complete a request (multi-part requests) or where retry/rollback/compensating transactions must be implemented.
Information Mediator: The Information Mediator is used to transfer information securely between Building Blocks where communication occurs across the internet.
Adaptors should not be used for outbound requests.
In general, it is the application which manages interactions and requests from various Building Blocks.
In the event that a Building Block needs to make a direct request to another Building Block, it is recommended to use the Workflow Building Block to manage the URL and data mapping.
Adaptors should always be tied to specific products. We will not have universal adaptors. Adaptors will be used to transform data based on specific APIs that have been defined by GovStack.
Adaptor technologies may be re-used for different adaptors, or we may have a paradigm where a generic adaptor can be configured via config files.
Adaptors perform three distinct functions (all synchronous):
Class 1 - URL rewriting (mapping a GovStack URL to a URL from a native API)
Class 2 - Payload mapping (transforming data payloads between GovStack and native formats)
Class 3 - Synchronous 1:many calls and composition of responses into a single response
Note: async and/or complex transactions would require the use of the Workflow Building Block
Please refer to this document for additional context/information and example scenarios for adaptors.
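To illustrate Classes 1 and 2, the sketch below maps a hypothetical GovStack-style URL and payload onto an invented native API ("AcmeReg"); the paths and field names are made up for the example, and real adaptors are written against the concrete Building Block API definitions:

```python
# Hypothetical mappings for a made-up product ("AcmeReg").
URL_MAP = {  # Class 1: GovStack URL -> native URL
    "/registration/applications": "/acme/api/v2/cases",
}

FIELD_MAP = {  # Class 2: GovStack field name -> native field name
    "applicantName": "case_owner",
    "submittedAt": "created_ts",
}

def rewrite_url(govstack_path: str) -> str:
    """Class 1: rewrite a GovStack URL to the product's native URL."""
    return URL_MAP[govstack_path]

def map_payload(govstack_payload: dict) -> dict:
    """Class 2: translate a GovStack-style payload into the native schema,
    passing through any fields that need no renaming."""
    return {FIELD_MAP.get(k, k): v for k, v in govstack_payload.items()}
```

A Class 3 adaptor would additionally fan one incoming request out to several native calls and compose the responses; anything asynchronous belongs in the Workflow Building Block, as noted above.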
The highest level of integration involves existing products implementing APIs that can be directly consumed by other GovStack Building Blocks. This means that the application will provide APIs that are in alignment with the API specifications for the Building Blocks that they are supporting.
The following diagram shows a complete GovStack deployment with API gateways for citizen access via web or mobile and for existing applications to be able to call GovStack APIs on demand. The workflow building block is used as an adaptor, exposing existing applications as GovStack resources via OpenAPI:
Here, citizens and existing applications are provided API access for requests into GovStack via a common API Gateway, while the workflow Building Block adaptor provides outgoing API access to existing applications from GovStack Building Blocks.
The GovStack team has created a testing platform that allows existing products to validate their APIs against the GovStack specifications. The testing platform consists of a set of tests (written in Gherkin) that can be run against one or more candidate products. These tests are run on a Continuous Integration (CI) platform and are executed automatically whenever changes are made to the GitHub repository for the Building Block.
The test platform will provide a detailed output of the test results, showing which tests are passing and failing for each candidate product.
The testing platform can be accessed at https://testing.govstack.global
New tests may be created for a Building Block. These tests are stored in the 'test' directory of the Building Block GitHub repository (GitHub repositories for the various Building Blocks can be found here). A new test can be added by creating a .feature file in the 'features' directory. All tests are written using Gherkin. Note that supporting code to run these tests should be stored in the 'support' folder under 'features'.
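As a sketch, a minimal .feature file might look like the following; the endpoint, payload and field names here are hypothetical, and real tests are written against the specific Building Block's API definition:

```gherkin
Feature: Create registration application

  Scenario: A valid application is accepted
    Given the service is running
    When a POST request is sent to "/registration/applications" with a valid payload
    Then the response status code should be 201
    And the response body should contain an "applicationId"
```

Each `Given`/`When`/`Then` step is backed by step-definition code in the 'support' folder, which performs the actual HTTP calls and assertions.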
New tests will automatically be run when a test cycle is started for a Building Block.
A new candidate product can be integrated into the test harness by adding a new folder to the 'examples' directory of the Building Block GitHub repository (GitHub repositories for the various Building Blocks can be found here).
The 'examples' folder provides configuration files that will launch the product in the test (CI) environment using docker and docker compose. Testing a new product requires the addition of a new Dockerfile (or set of Dockerfiles) that will build the product, and a docker-compose file and docker-entrypoint.sh file that will launch and configure the product in the test environment so that it is ready to receive the test requests.
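A hypothetical layout for such an 'examples' folder entry is sketched below; the service name, port and entrypoint are assumptions for illustration, and the actual files depend on the product being integrated:

```yaml
# examples/my-product/docker-compose.yml (illustrative only)
version: "3.8"
services:
  my-product:
    build: .                        # Dockerfile in the same folder builds the product
    entrypoint: ./docker-entrypoint.sh  # launches and configures the product
    ports:
      - "8080:8080"                 # port the automated tests will call
```

The CI platform brings this composition up and then runs the Gherkin test suite against the exposed API.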
For more detailed information on how to create new tests or integrate new products into the test harness, please refer to this document, which is geared toward developers and technical teams.
The following standards MUST be used in the development of any Building Block. Adhering to common standards as listed below promotes interoperability and facilitates efficient data transfer between Building Blocks.
Unicode: used for encoding text. https://en.wikipedia.org/wiki/Unicode
ISO 8601 (UTC): used for dates and timestamps. https://en.wikipedia.org/wiki/ISO_8601#Coordinated_Universal_Time_(UTC)
JSON: used for exchanging data.
JSON Schema: used for specifying data models. Note that OpenAPI 3.1 has full support for JSON Schema. http://json-schema.org/
REST: used for implementing APIs.
OpenAPI 3.1: used for specifying and documenting APIs. https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.1.0.md Note that OpenAPI 3.1 supports inline JSON Schema for model definitions.
Docker containers: used for packaging building block components for deployment. https://en.wikipedia.org/wiki/Docker_(software) https://www.docker.com/resources/what-container
QR codes: MUST be generated according to the ISO/IEC 18004:2015 standard.
GovStack architecture enables an application to call services of another application within GovStack and get responses containing information from the called application. In many cases, control over the user interface may need to be passed from one application to another Building Block. For example, if a user is doing biometric or multi-factor authentication, the ID Building Block can present the UX to the user for that process. If a user is sending or receiving a payment, the UX can be handed off to the Payment Building Block for the user to enter account information for the payment. In general, UI-level switching may be necessary because:
a. The called service may collect inputs from the user directly through its own UI, as it is not preferable to expose the collected data to the calling application, for security reasons.
b. It may be unreasonable to expect that the calling application designs screens of other Building blocks it calls, considering diverse requirements, standards, policies, etc., in respective domains.
c. Building Blocks may be developed by different entities and evolve independently. Hence, tight integration is not preferred; loosely coupled but secure interoperability is needed.
If the application needing a service and the application providing the service are not co-located, then some control and data exchanges are needed at the UI level, and a secure authentication mechanism is needed before the service is provided. A few authentication mechanisms relevant to UI-level switching between independent applications are discussed below.
Consider, for example, a citizen already registered in a GovStack deployment who logs into the energy department's application to pay an electricity bill. The application submits the user's login credentials to an identity server at its backend and, if authentication succeeds, gets in return a session token for that user. On its UI, the application presents a due electricity bill along with a "Pay" button. When the user clicks the "Pay" button, the application UI redirects the user to a Payment Building Block UI to collect the relevant payment. After the payment is remitted, the Payment Building Block redirects back to the energy department application to confirm successful payment, after which the application may present a receipt generated for the user.
Given that context, the following ways are possible for UX switching:
1. OpenID Connect based Single Sign-On (SSO)
A Single Sign On (SSO) system can be used, which allows an authentication token to be passed from the application to another Building Block. An authorization server is used to handle the initial user login and the access token received from the auth server can be passed to other Building Blocks and used to authorize access to the Building Block functionality.
OpenID Connect based Single Sign-On (SSO): SSO allows users to authenticate once and access multiple applications without repeated login prompts, adopting popular protocols such as the OAuth authorization framework and OpenID Connect. The OAuth framework allows users to grant access to their resources without sharing their credentials with the requesting application. It enables a user to authenticate with one application (called the "identity provider" or "authorization server") and obtain an access token. This access token can then be used to access protected resources in other applications (called "resource servers") without requiring the user to authenticate again.

OpenID Connect builds upon OAuth 2.0 to provide user authentication as well as authorization, allowing identity information to be exchanged between the identity provider and relying applications. When a user authenticates with the identity provider, the relying application (service provider) receives an ID token containing information about the user. This ID token can be used to authenticate the user in the relying application by cross-verification with the identity server. By letting users authenticate once with the identity provider and then access multiple applications without repeated login prompts, OpenID Connect improves the user experience and reduces the need to manage multiple sets of credentials.
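The ID-token claim checks described above can be sketched as follows. This is an illustration only: a real relying application MUST also verify the token signature against the identity provider's published keys (JWKS), a step omitted here for brevity.

```python
import base64
import json
import time

def decode_segment(seg: str) -> dict:
    """Decode one base64url JWT segment into a dict."""
    seg += "=" * (-len(seg) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))

def check_id_token_claims(id_token: str, issuer: str, client_id: str, now=None) -> dict:
    """Validate the standard ID-token claims (iss, aud, exp).
    NOTE: signature verification against the provider's JWKS is
    intentionally omitted in this sketch but is mandatory in practice."""
    payload = decode_segment(id_token.split(".")[1])
    now = time.time() if now is None else now
    if payload.get("iss") != issuer:
        raise ValueError("unexpected issuer")
    if payload.get("aud") != client_id:
        raise ValueError("token not intended for this client")
    if payload.get("exp", 0) <= now:
        raise ValueError("token expired")
    return payload
```

The issuer and client ID (`issuer`, `client_id`) come from the relying application's OIDC registration with the identity provider.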
This mechanism has some inherent advantages such as:
Enhanced User Authentication: OpenID Connect provides a standardized and robust mechanism for user authentication. It allows for the exchange of identity information between the identity provider and relying applications, enabling stronger authentication and verification of user identities.
Standardized Protocol: OpenID Connect is widely adopted and standardized, making it easier to implement and integrate with various applications and platforms. It provides clear guidelines and specifications, reducing the complexity of authentication implementation.
User Profile and Attributes: OpenID Connect allows for the retrieval of user profile information and additional attributes from the identity provider. This can provide valuable user data to relying applications for personalization, authorization, and user-specific functionality.
There are some inherent disadvantages of OpenID Connect as well:
Authentication: The called application authenticates the user, but not the calling application, especially when the two applications are not physically co-located.
Increased Complexity: Implementing OpenID Connect can be more complex compared to token-based authentication or secure proxy methods. It requires understanding the underlying OAuth 2.0 framework and configuring the identity provider and relying applications accordingly.
External Dependency: OpenID Connect relies on an external identity provider for user authentication. This introduces a dependency on the availability and reliability of the identity provider. If the identity provider experiences issues, it can affect the authentication process for the relying applications.
Single point of failure: A centralized identity server may become a single point of failure, but it also provides consistency and allows security concerns to be managed at one place in the architecture.
2. IFRAME based Secure Proxy Authentication
In this case, the calling application's UI has an embedded screen component (iframe) that internally points to the called application's webserver URL. Information within the iframe can be isolated from the main application, or select information may be exchanged through triggered "events" exposed between them. An iframe (inline frame) is an HTML element that allows one HTML document to be embedded within another. In the context of authentication and secure redirection between UIs of different web applications, iframes are typically used as part of the secure proxy mechanism. Specifically, iframes can be used within the secure proxy to load and display content from a different web application or domain while maintaining the security boundaries between the two applications. The iframe serves as a container for displaying the UI of the target application within the UI of the calling application. Iframes are often employed in this context to achieve seamless integration and user experience between different web applications, allowing UI components from multiple sources to be rendered within a single interface.
Assuming the user has already logged into the parent (calling) application, some action in the main application screen (like clicking a "Pay" button) may invoke the iframe. The application verifies the user's role-based access permissions to invoke the Payment BB service and then transfers control to the iframe, which in turn invokes the UI of the called application/BB that was linked to the iframe at build time. The Payment BB receives the request, presents its UI in the iframe and, when its business is finished, posts a status update event in the iframe with the relevant details. The iframe relays the event to the calling application, which then processes the details and takes further steps as appropriate.
The secure proxy mechanism provides several advantages:
Centralized Security: By intercepting and controlling client requests, one can enforce security policies consistently across multiple applications and services. The calling application owns the responsibility to use the secure proxy as a central point for authentication and authorization, while the called applications need not handle these concerns and rely on the calling application as source of truth.
Simplified Logic: With a secure proxy handling authentication and authorization in the parent application, called applications can focus on their core functionalities rather than implementing these security mechanisms independently. This simplifies application development and maintenance.
However, it has some limitations and challenges:
The called application places trust and control in the calling application, hence the risk of serving a calling application that is already compromised.
Since applications and BBs are typically separate third-party products, their development may not be synchronized.
It is important to note that the secure proxy mechanism introduces an additional component to the architecture, which requires proper configuration, maintenance, and monitoring. It also adds an extra network hop and potential performance overhead, so it's crucial to ensure that the proxy infrastructure is appropriately scaled and optimized to handle the expected traffic.
The specific implementation of the secure proxy mechanism can vary based on the chosen proxy software or infrastructure components.
3. Key-based, Decentralized Authentication
This method involves generating and exchanging a dynamically generated key between the applications. Assuming the user has already authenticated and logged into the first application, the user starts a transaction in that application. When the user clicks a relevant button (e.g. "Pay") on the screen of that application, the application obtains a unique temporary key from the target application by making a specific API request through the Information Mediator. The application's frontend then redirects to the URL of the webserver of the target BB/application and passes the key along with other relevant data. The called application validates the key internally before providing the UI to complete the payment transaction. After completion, the BB's backend returns the passed/failed status to its frontend UI. The UI then redirects back to the calling application, returning the status as a payload. JSON Web Token (JWT) is a commonly used token format.
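The one-time key exchange described above can be sketched with a minimal HMAC-signed key using only the standard library; this is an illustration of the idea only, since production systems would typically use JWT/JWK as noted and would provision the shared secret securely:

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"demo-secret"  # illustration only; real deployments exchange secrets securely
_issued = set()  # nonces the called application has handed out and not yet consumed

def issue_key() -> str:
    """Called application issues a unique one-time key (requested by the
    calling application via the Information Mediator)."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(SHARED_SECRET, nonce.encode(), hashlib.sha256).hexdigest()
    _issued.add(nonce)
    return f"{nonce}.{sig}"

def validate_key(key: str) -> bool:
    """Verify the signature and enforce single use before serving the UI."""
    nonce, _, sig = key.partition(".")
    expected = hmac.new(SHARED_SECRET, nonce.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or nonce not in _issued:
        return False
    _issued.discard(nonce)  # one-time use: a replayed key is rejected
    return True
```

Issuing a fresh key per service call (rather than a session-wide token) is what enables the "dynamic provisioning" advantage discussed below.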
This mechanism has some advantages:
Simplicity and Flexibility: Token-based authentication, such as JWT, is generally simpler to implement and understand compared to OIDC. It provides flexibility in how tokens are generated, validated, and managed, allowing for customization based on specific requirements.
Decoupled Architecture: Key-based authentication enables a decoupled architecture where the authentication process is not dependent on the identity provider. It also distributes the key-based authentication load across the called BBs/applications rather than a centralized server, and hence does not depend on a single point of failure.
Scalability: Key-based authentication can be more scalable since it does not require communication with an identity provider for authentication. The server can verify the token locally, reducing external dependencies and potential bottlenecks.
Dynamic provisioning: Since what is contained in a key is decided by the called application, it is possible to generate unique keys for each service call (instead of session wide tokens) enabling higher level of security.
There are some inherent disadvantages of Token-Based Authentication as well:
Lack of Standardization: Although JSON Web Tokens (JWT) signed by JSON Web Keys (JWK) are standard formats for encapsulating unique authentication information, the actual payload of a key is not a standardized specification for authentication. This can result in varied implementations and interoperability challenges when integrating with different systems and applications.
Authentication: The called application gets authentication of the calling application, because it receives the request for a key through the backend via the Information Mediator, which allows only registered applications to send requests. However, this does not authenticate the user; it trusts that the calling application has appropriately authenticated the user.
Additional Development Effort: Implementing token-based authentication may require more development effort to handle aspects such as token generation, validation, and session management. Customization and maintenance of these components can be time-consuming.
4. Hybrid Model
This is a combination of the OpenID Connect and key-based models, and hence has the advantages of both user and application authentication. In this case, the user logs in to the calling application/Building Block (in this example, Registration), which authenticates the user's credentials against an identity server and obtains a unique session token for that user. The registration process may collect the required details and present a "Payment" button to the user. When the user clicks it, the application sends a request for a one-time key to the called Building Block (in this example, the Payment Building Block). It then redirects to the URL of the Payment BB page and passes the user token and the key as part of the payload it needs to transfer to the Payment BB. The Payment BB now verifies the key to make sure a valid registered application is sending the request, and then authenticates the user token with the identity server to ensure an authorized user is requesting the service. After confirming this, it presents the required UI page for collecting the user's payment. Once the payment process is complete, it redirects back to the calling application's screen URL, along with a payload containing the same token and key and a success/failure status code. This switching can be multilevel, in the sense that the same protocol can be used by the Payment Building Block to switch to another Building Block at the front end if required. In such a case, the return path shall be in the reverse order of the forward switching path, so that the appropriate keys are used in each nested branch.
Any of these mechanisms may be used, depending on the implementation. In general GovStack recommends option 1 or option 4.
The key security functionalities outlined here describe the required facilities that this security building block MUST provide, as well as security compliance measures that must be implemented by all building blocks. Note that specific API definitions are not likely to be created by the security building block, as any interfaces required are to be based on open standards and implemented as part and parcel of acquired solutions. A good example of this is the adoption of standards like OAuth2 and OpenID Connect for authentication and authorization. The functional requirements for the implementation of an appropriate API Management and Gateway services solution can be found in a separate section of this document below.
The basic framework by which security is addressed for GovStack is largely based on the NIST Cybersecurity Framework (hitherto referred to as NIST CSF) and the NIST Special Publication 800-171 standard (hitherto referred to as NIST 800-171) for managing controlled unclassified information (CUI), but it also incorporates other security-related requirements. The GovStack security requirements are also informed by, and should be compatible with, the ISO/IEC 27001 standard (hitherto referred to as ISO-27001).
The specific GovStack security-related concerns are organized in terms of the major functions of the NIST CSF, which NIST defines through three major approaches/facets for implementation: the Framework Core, the Implementation Tiers, and the Profiles.
This section of the document provides a specific list of the concerns, principles, procedures and actions (collectively termed as security related issues) that have been identified for GovStack along with:
How each issue maps to the existing building blocks and their respective working groups
What type of organizational risk is anticipated if the issue is not addressed (i.e. high/medium/low)
Which target phase of the project the issue must be addressed in (i.e. first or second; third phases are usually never completed)
How feasible it is to address the issue in a limited-resource or low-resource setting (predominantly related to costs) - see the document for the definitions associated with low-resource settings.
A general description and/or discussion of the issue that needs to be addressed and the various alternatives available for addressing it (not exhaustive). The security issues are organized by the 5 NIST functions (Identify, Protect, Detect, Respond, Recover).
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Identity
Description: Authentication and authorization MUST be addressed across the board. This is likely to be built in to the API Management and Gateway services and accessed by mobile and web applications using a token-based approach. All communications from all clients (web/mobile/BB clients etc.) MUST be via API, so this is a sensible point of implementation given the stateless nature of the applications. Authentication and authorization MUST also be addressed at the application access level for each and every application (web/mobile/desktop etc.). It would be wise to utilize the same framework and capabilities for this.
Comments: Each building block MUST implement centralized authentication and authorization (minimally proxied or implemented via the common IAM solution and/or the API Management and Gateway services).
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Identity/Security
Description: Credential strength management is likely built in to the API Management and Gateway services. All communications from all clients (web/mobile etc.) must be via API, so this is also a sensible point of implementation. Each application MUST provide the ability to determine credential strength at the time of registration and thus permit or deny the offered credential.
Comments: Each building block MUST implement multi-factor token and password strength/complexity management for an indeterminate array of factors, including biometrics (this is to be implemented by leveraging a common IAM solution).
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Identity/Security
Description: Access control is also likely built in to the API Management and Gateway services. All communications from all clients (web/mobile etc.) must be via API. The thin-client nature of web and mobile dictates that all resources will exist behind API interfaces, so this is the most sensible point to address it (i.e. either they have access to the API or they do not).
Comments: Each building block MUST implement role-based access control for all exposed APIs and resources (minimally proxied or implemented via the API Management and Gateway services).
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Identity/Security
Description: This will need a process-based solution. The selection will largely depend on the identity management and API management infrastructure chosen, but will likely need to be a customized process that integrates provisioning across a number of products and services.
Comments: Each building block MUST implement the ability to provision, deprovision and manage identities and access rights (this may or may not be centralized for the whole architecture as a unified provisioning process).
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Identity/Security
Description: Likely built in to the API Management and Gateway services. All communications from all clients (web/mobile etc.) must be via API.
Comments: Each building block MUST implement access and authorization audit, logging, tracing and tracking with alerts (minimally proxied or implemented through the API Management and Gateway services).
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Identity/Security
Description: Significant functionality provided by MOSIP - TBD
Comments: Each building block dealing with physical devices MUST implement end-user device registration, deregistration and re-registration, and provide device platform security guidance/requirements to end users.
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Identity/Security
Description: Significant functionality provided by MOSIP - TBD
Comments: Each building block dealing with biometric credentials MUST implement biometric security credential management, registration, deregistration, re-registration and validation, and device platform security as above for biometric capture devices.
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Identity/Security
Description: Likely built in to the API Management and Gateway services, but may also require a certificate server etc. All communications from all clients (web/mobile etc.) must be via API.
Comments: Each building block MUST implement a framework for non-repudiable transactions using certificates and federation protocols (X.509, OpenID, SAML 2.0 etc.).
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Identity/Security
Description: Likely built in to the API Management and Gateway services. All communications from all clients (web/mobile etc.) must be via API.
Comments: Each building block MUST be able to implement single sign-on integration with third-party security.
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: All
Description: Applies to all connections throughout all components, such as: Web/Mobile UI <-> API, Web/Mobile UI <-> Auth, BB <-> API <-> BB, Workflow <-> API.
Comments: Each building block MUST implement SSL/TLS-based connections for all TCP connectivity, both external to the building block and internally between components, in a selective manner depending on data requirements.
4.2.2.2 Data Sovereignty/Residency Controls and Hosting, Transmission, Backup and Recovery
Organizational Risk Rating: Medium
Target Deployment Phase: Second
Feasibility for Limited/Low Resource Settings: Medium
Building Block Mappings: All
Description: This probably needs to be addressed during country rollouts due to data sovereignty regulations, but needs to be catered for in the architecture options.
Comments: Each building block dealing with citizens' data MUST provide the ability to implement data sovereignty/residency controls and hosting, transmission, backup and recovery in compliance with the specific national laws and guidelines of each country.
Organizational Risk Rating: High
Target Deployment Phase: First
Feasibility for Limited/Low Resource Settings: High
Building Block Mappings: Cloud Infrastructure and Hosting
Description: Infrastructure-oriented, but could be addressed at least in part by software-defined networking (SDN), which can be part of a modern PaaS such as OKD.
Comments: Each building block shall comply with the overall secure networking architecture deployed with each country implementation. Issues such as network security, networking protocols and firewall implementations shall be defined as a part of the recommended architecture, showing the various zones and separations required.
4.2.2.4 Application Services Security (multi-tenancy etc.)
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: All/Cloud Infrastructure and Hosting Description: Likely best implemented as a feature of the chosen PaaS framework such as OKD Comments: The components for each building block are required to be deployed in containers according to the architecture description. The chosen container orchestration platform and PaaS solution shall provide the means to implement Application Services Security (such as multi-tenancy etc.)
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: All/Cloud Infrastructure and Hosting Description: Infrastructure oriented but could be addressed at least in part by software defined networking (SDN) which can be part of a modern PaaS such as OKD Comments: Each building block MUST comply with a defined set of VPN and Secure Network Access Controls. This will be based on the location of event producers and consumers on the network and on how the various segments of the network are sliced and protected. These standards are to be defined as part of the target architecture implementation for each country.
Organizational Risk Rating: Medium Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: All/Cloud Infrastructure and Hosting Description: Can be sourced from open source tooling such as OpenVAS, Wireshark etc. Comments: A suite of open source tools is to be adopted for the purposes of Network Vulnerability Scanning. These tools MUST be acquired by the project and centrally deployed in each country to ensure adequate network service security is in place.
Organizational Risk Rating: Medium Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: All/Cloud Infrastructure and Hosting Description: Infrastructure oriented but could be addressed at least in part by software defined networking (SDN) which can be part of a modern PaaS like OKD Comments: The project MUST adopt a software defined networking solution as a part of the core deployment architecture. This can and SHOULD be implemented as a part of the chosen PaaS solution.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: All/Cloud Infrastructure and Hosting Description: Infrastructure oriented but could be addressed at least in part by an immutable OS such as CentOS, which can be part of a modern PaaS such as OKD Comments: The project MUST deploy all services (typically microservices) on an immutable operating system infrastructure. Typically this can and SHOULD be provided as part and parcel of the chosen PaaS solution. The reason for this is that, if a security breach does occur, the operating system running the component cannot be modified by the attacker.
Organizational Risk Rating: Medium Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: High Building Block Mappings: All/Cloud Infrastructure and Hosting Description: Can be addressed as part and parcel of a modern PaaS solution such as OKD including a service mesh such as Istio Comments: The project MUST implement a PaaS infrastructure that supports Network, Service and Transaction Observability and Visibility for protection against flaws and faults. This is so that complex transactions involving multiple components (likely microservices) can be observed and traced for the purposes of debugging and auditing. This would typically be implemented at least in part by a service mesh feature of the PaaS along with other integrated components for visualization, such as the Jaeger open source tracing facility and the Kiali open source visualizer for example.
Organizational Risk Rating: Medium Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Low Building Block Mappings: All Description: Needs to handle insecure WIFI, email spam filtering, virus scanning and ad-blockers etc. at all levels. Comments: Needs to handle insecure WIFI, email spam filtering, virus scanning and ad-blockers etc. at all levels.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: All Description: Can be addressed as part and parcel of a modern PaaS solution such as OKD (which has secure encrypted stores for credentials) Comments: Each building block MUST adopt the facilities of the chosen PaaS for implementing Cloud Platform Configuration Management and Securing Configurations. For example authentication credentials for common components like databases need to be managed appropriately and simply across multiple environments through DevOps automated deployment etc.
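One common pattern for the credential-management requirement above is to read secrets from a file mounted by the orchestrator (e.g. a Kubernetes/OKD Secret volume), falling back to an environment variable, so that credentials never appear in container images or source code. The mount path and names below are illustrative assumptions, not part of this specification:

```python
import os

def load_secret(name: str, mount_dir: str = "/run/secrets") -> str:
    # Prefer a secret file mounted by the platform; fall back to an
    # environment variable injected by DevOps automated deployment.
    # Never hardcode credentials in the image itself.
    path = os.path.join(mount_dir, name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return os.environ[name.upper()]
```

The same component code then works unchanged across environments: the deployment pipeline decides whether the database password arrives as a mounted file or an environment variable.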
Organizational Risk Rating: Medium Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: The same security protocols, data encryption, protections and monitoring are to be applied consistently both internally and externally. Comments: Each building block MUST adopt the same consistent security and privacy implementation measures to protect against insider threats and support internal auditability as those adopted for external exposures. This is a general statement.
Organizational Risk Rating: Medium Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: High Building Block Mappings: All/Cloud Infrastructure and Hosting Description: Can be managed by implementing open source tools/services such as CrowdSec, DDOS Deflate, Fail2Ban, HAProxy, DDOSMon, NGINX etc. Note that HAProxy is built into PaaS solutions like OKD - TBD. Note that a number of API Gateway products are built around NGINX Comments: The project MUST implement an open source denial-of-service attack prevention solution across all interfaces exposed to the public internet for each country deployment. This can be implemented through a reverse proxy web server environment using open source tools such as Fail2Ban, DDOS Deflate or HAProxy.
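The per-client rate limiting that reverse proxies such as HAProxy and NGINX apply is commonly a token-bucket scheme. A minimal sketch follows; the capacity and refill values are illustrative, not tuned recommendations from this specification:

```python
import time

class TokenBucket:
    # Minimal token-bucket rate limiter of the kind a reverse proxy
    # applies per client IP: each request spends one token; tokens
    # refill at a steady rate up to a fixed capacity.
    def __init__(self, capacity=10, refill_per_sec=5.0, now=time.monotonic):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.tokens = float(capacity)
        self.now = now            # injectable clock, eases testing
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.refill)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a real deployment one bucket is kept per client IP (or API key) and requests that return `False` receive an HTTP 429, throttling flood traffic before it reaches the building blocks.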
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: The implementation of a centralized API Management and Gateway solution is probably not negotiable. All API interfaces both internal and external must be managed through this facility. Architecture will likely require a separate gateway for both internal and external services. Comments: The project MUST implement centralized API Endpoint Security Policy Management and Gateway Services (both internal and external). The reason for this is to implement a consistent layer of security for API interfaces that can be both managed centrally and alleviate the service developers from the implementation complexity of security. This can and SHOULD be implemented through an open source API Management and Gateway services product.
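The centralized policy layer described above can be pictured as a table mapping each registered endpoint to the authorization it requires, checked once at the gateway so individual services never re-implement it. The paths and scope names below are hypothetical illustrations, not GovStack-defined values:

```python
# Hypothetical central policy table: endpoint -> required token scope.
POLICIES = {
    "/registry/search": "registry.read",
    "/payments/send":   "payments.write",
}

def gateway_check(path: str, token_scopes: set) -> bool:
    # A request passes only if the path is a registered endpoint AND
    # the caller's token carries the scope the policy table demands.
    # Unregistered paths are denied by default.
    required = POLICIES.get(path)
    return required is not None and required in token_scopes
```

Because the table lives in one place, tightening a policy (or retiring an endpoint) is a gateway configuration change rather than a code change in every building block.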
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: Requires extensive and probably commercial software in many layers. Comments: The project MUST implement protection from and detection of Virus/Malware and Ransomware Attack. This likely requires a commercial solution suite, as open source offerings are insufficient and not suitable for massive country-wide deployments such as GovStack.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: Can be addressed as part and parcel of a modern PaaS solution such as OKD (which has secure encrypted stores for credentials). Requires implementation through all development procedures and CI/CD etc. Comments: The project MUST implement credential theft prevention as part and parcel of its selected PaaS infrastructure. This is essentially an encrypted keystore that can host sensitive credentials and provide access to them with policy based security.
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: Can be addressed through open source tools such as those provided by the OWASP Foundation. Plugins for OWASP tools are available for PaaS solutions like OKD Comments: The project MUST implement SQL Injection Attack prevention as part and parcel of any and all applications development across building blocks. This can be implemented through open source tools such as those provided by the OWASP Foundation. This can and SHOULD be implemented as plugins through the PaaS solution and fully integrated into the DevOps toolchains for the project build and deployment.
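The core of SQL injection prevention, as the OWASP guidance referenced above describes, is parameterized queries: user input is bound as data, never concatenated into the SQL text. A small illustration using Python's built-in sqlite3 driver (table and function names are our own):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # The "?" placeholder makes the driver bind `username` as a
    # value; input such as "x' OR '1'='1" cannot alter the query
    # structure the way string concatenation would allow.
    cur = conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    )
    return cur.fetchall()
```

The classic injection payload simply matches no rows, because it is compared literally against the `name` column instead of being parsed as SQL.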
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Low-Medium Building Block Mappings: Security Description: Typically these types of attacks are best mitigated by the use of commercial open source subscriptions as they close the window of vulnerability. New CVE’s happen every month and are becoming more common as IT advances. The upstream open source projects like OKD will eventually get the patches but it will be some time after commercial open source product patches are released to those with enterprise subscriptions. Comments: The project MUST implement solutions for Containing, Managing and Mitigating Hardware/Firmware Vulnerabilities and Known Exploits (directory traversal, rowhammer, spectre, meltdown, LazyFP etc.). This can and SHOULD be implemented through plugins to the PaaS solution and DevOps toolchain for the project to ensure that every single component deployed is scanned for known CVEs.
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: For the most part this involves end-point protection. Some tools are available in open source but it requires multiple layers of implementation and is thus more expensive. Comments: The project SHOULD implement solutions for prevention against Data Privacy/Loss/Leakage/Confidentiality (individuals and organisations). This likely requires expensive commercial packages rather than open source tooling.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: Needs to be considered for each data store and each connection in the architecture. It is a very broad topic that must be addressed in the context of the data confidentiality requirements for each country implementation. Will be relatively expensive for resource-limited settings but it is a necessity. Comments: The project MUST implement solutions for Data Security (at rest and in transit - i.e. encryption and obfuscation etc.) consistently across all datastores and connections within and surrounding all building blocks.
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: This is complex and applies to large architectures with multiple data centers and perimeter services. Every single element of the networks and trusts must be examined for vulnerabilities. There is no simple one-shot solution for this as it involves multiple data centers and multiple layers communicating and replicating potentially sensitive data. The degree to which this is addressed is commensurate with the relative sensitivity of the data and services involved. Comments: The project MUST provide Replication and Perimeter/Edge Data and Services Security. This assumes that practically all national deployments will have multiple sites that must be protected from breaches and leakages as a result of technology services deployed to distribute and consolidate data.
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: This is a complex subject that requires not only technical intervention but extensive training for everyone involved in the processes of eGovernment. Comments: The project MUST provide protection from Social Media, Social Network and Social Engineering Threats. This is not only a technology services issue but also applies to standard operating procedures and requires extensive user education.
Organizational Risk Rating: Medium Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security/Cloud Infrastructure and Hosting Description: Can be addressed as part and parcel of a modern PaaS solution such as OKD - TBD (which has over the air update capabilities along with centralized management and monitoring). Comments: The project MUST provide Centralized PaaS Management, Monitoring and OTA (over-the-air) Automated Update for infrastructure and applications components. This is to ensure that patches to emerging and common vulnerabilities are addressed in the smallest window possible and the whole architecture is not exposed through such vulnerabilities. This can and SHOULD be addressed as part and parcel of the selected PaaS infrastructure.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security/Registration Description: Can be addressed as part and parcel of a modern PaaS solution such as OKD - TBD (which has an embedded services registry) - there may be other registries required that are also impacted. There needs to be careful delineation of functionality and terminology used for registries as there are many implications in PaaS environments such as for service exposure through software defined networks etc. Comments: The project and all building blocks utilizing registries of any kind (particularly digital service registries) MUST provide Digital Service Registry Security. This means ensuring that the protocols, interfaces and connections to such centralized services are controlled in accordance with the other requirements for connections and API etc.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security/Cloud Infrastructure and Hosting Description: Each core network service must be addressed in its own right. There is too much to discuss here but these services are critical, exposed, prone to vulnerabilities and must be secured with the highest possible standards applied. Comments: The project MUST address the general Surrounding Networking Software Infrastructure Security (DNS, DHCP , PXE, BootP services etc.) for each and every country rollout. These services are particularly vulnerable as some of them are exposed to insecure zones (especially DNS).
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security/Cloud Infrastructure and Hosting Description: Each cloud service must be addressed in its own right. There is too much to discuss here but these services are critical, exposed, prone to vulnerabilities and must be secured with the highest possible standards applied. Comments: The project MUST provide protection against common vulnerabilities in Cloud Provider Infrastructure Security (hardware layer, infrastructure layer, virtualisation, container and platform layer, application layer etc.). This is specific to each public cloud provider where utilized but a common source of threats since they involve complex suites of services that are stitched together (most often by the implementer not the cloud provider) in multiple layers to form the solution architecture.
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security/Cloud Infrastructure and Hosting Description: This is principally the same as endpoint security to prevent information leakage. Comments: The project (including all building blocks) MUST provide protection against private information leakage through Compressed and Encrypted Information Transmission via Messaging and Email etc. to external or internal 3rd parties, along with any other potential channels for critical private information leakage.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: Phishing and the associated spam are two of the most common digital platform security problems and must be dealt with. A number of open source solutions exist such as OrangeAssassin, MailScanner and Apache SpamAssassin. These are commonly used by many commercial sites all around the world. Comments: The project MUST provide Anti-Phishing and Anti-Spam tooling. These are some of the most common sources of security issues and can be dealt with using open source tools such as OrangeAssassin, MailScanner and Apache SpamAssassin.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: Physical security is a must for all on-premise facilities and almost goes without saying. The extent to which physical security is implemented must comply with national government hosting standards in each country. Comments: The project MUST provide physical security measures for access to physical facilities.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: This is commonly known as endpoint security and must be dealt with comprehensively either at hardware level (by disconnection) or in software that allows policy and procedure for removable media to be controlled. Comments: The project MUST provide portable and removable media controls to protect against information leakage.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: This is a complex and diverse area to address as there will be several data repositories and databases in the deployed infrastructure including both CUI and regular non-CUI information. Backups are one of the areas that create large exposure for information loss and must be addressed consistently and thoroughly.
This must include encryption of backups and the physical security of backups, as well as the policy, procedure and controls to ensure that leakage risk is minimized. Comments: The project MUST provide backup information controls and security to prevent information leakage and tampering etc. through the backup and recovery processes. This applies to all building blocks.
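A basic building block of tamper-evident backups is recording a keyed digest alongside each backup artifact, so modification is detectable at restore time. A sketch with Python's standard hmac module follows (function names are illustrative; encryption of the backup payload itself, e.g. with AES-GCM, would be layered on top of this check):

```python
import hashlib
import hmac

def backup_digest(data: bytes, key: bytes) -> str:
    # Keyed HMAC-SHA256 over the backup contents; stored alongside
    # the backup so any tampering changes the digest.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_backup(data: bytes, key: bytes, expected: str) -> bool:
    # compare_digest avoids timing side channels when checking tags.
    return hmac.compare_digest(backup_digest(data, key), expected)
```

Because the digest is keyed, an attacker who can alter the backup but does not hold the key cannot forge a matching tag, unlike a plain checksum.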
Organizational Risk Rating: Medium Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: Several open source and commercial tools are available to address this in terms of scanning for vulnerabilities in all layers of the solution including containers for example. The process and tools for this need to be addressed specifically depending on the technical components of the final solution architecture. Comments: The project MUST provide tools to detect, manage and automate the resolution of common vulnerabilities in all layers (applications components down to infrastructure components) before they eventuate as deployed in the solution. There are many open source scanning tool options available for this purpose. CISA defines the standards for this and refers to many of the available open source tools.
Organizational Risk Rating: Medium Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: This needs to be built-in to the CI/CD process to ensure that only secured images are deployed to production. Most modern PaaS offerings such as OKD - TBD are also shipped with a rudimentary suite of tools to accomplish this. Comments: The project MUST provide a secure DevSecOps process for code deployment. This applies to areas surrounding Applications Development, Deployment, DevSecOps and Container Image Security scanning (i.e. what's inside the container) before applications components are deployed.
Organizational Risk Rating: Medium Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Low-Medium Building Block Mappings: Security Description: These are global standards for security compliance checking and several tools are available in the market to address this. Both commercial and open source options are available. Comments: The project MUST provide tooling support for Compliance Checking and Scanning (PCI DSS/HIPAA, CIS etc.). A number of open source tools and commercial off-the-shelf tools are available for this purpose. This compliance checking MUST be conducted on a regular basis for all building blocks dealing with financial, healthcare and personal data in accordance with the aforementioned PCI DSS, HIPAA and CIS standards.
Organizational Risk Rating: Low Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Low-Medium Building Block Mappings: Security Description: Several tools are available in this space with varied ways of addressing the concerns including threat modeling Comments: The project SHOULD provide tools for Security Risk Profiling. Several such tools are available and offer assessment of security risks through techniques such as threat modeling.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: Several open source tools are available to address this space admirably including for example Snort, Bro, Kismet, OSSEC and Open DLP etc. These are VERY effective, VERY mature and easily adopted. Comments: The project MUST deploy tooling for Threat/Intrusion Detection and Prevention in the infrastructure and other layers. Several such tools are available in open source (Snort, Bro, Kismet, OSSEC etc.)
Organizational Risk Rating: Medium Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: This should be managed with the basic user device profile and an alert sent via email and other channels whenever a login is made from a new endpoint, giving the user the option to lock the account. Comments: The project across all building blocks SHOULD provide Login Notifications and Alerts etc. (new device or IP etc.), where email or other notifications would be sent to recipients with warning messages when authentication is performed from a new device. This also involves keeping a registry of registered devices for each user/party/actor (whether internal or external) and the ability for them to lock their account if the authentication was performed by an unknown party.
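The device-registry mechanism described above can be sketched as follows: each login is fingerprinted, compared against the devices already seen for that user, and flagged as new when no match exists so a notification can be sent. The fingerprint inputs and class shape are our own illustrative assumptions:

```python
import hashlib

class DeviceRegistry:
    # Illustrative sketch: remember a fingerprint per user and flag
    # logins from previously unseen devices so an alert can be sent
    # and the user offered an account lock.
    def __init__(self):
        self.known = {}  # user -> set of device fingerprints

    @staticmethod
    def fingerprint(user_agent: str, ip: str) -> str:
        # A real system would use richer, more stable signals than
        # user-agent + IP; these two keep the sketch simple.
        return hashlib.sha256(f"{user_agent}|{ip}".encode()).hexdigest()

    def login(self, user: str, user_agent: str, ip: str) -> bool:
        """Record the login; return True if this device is new for the user."""
        fp = self.fingerprint(user_agent, ip)
        seen = self.known.setdefault(user, set())
        is_new = fp not in seen
        seen.add(fp)
        return is_new
```

When `login` returns `True`, the notification channel (email, SMS etc.) would fire and present the lock-account option.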
Essentially, OSINT puts the hackers' tools into the security analyst's protection arsenal. Comments: The project SHOULD provide Open Source Intelligence (OSINT) Platforms/Tools and Processes in order to perform security assessments on open source usage. This essentially incorporates the hackers' tools into the protection and detection arsenal (see the evolution of Metasploit and other white-hat hacking solutions in open source).
Organizational Risk Rating: Low Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Low Building Block Mappings: Security Description: This is an emerging area of value that largely comprises the following types of visualizations, although there are no comprehensive open source tools available:
Perimeter Threat
Network Flow Analysis
Firewall Visualization
IDS/IPS Signature Analysis
Vulnerability Scans
Proxy Data
User Activity
Host-based Data Analysis Comments: The project SHOULD provide tooling for Information gathering and Security Data Visualization for maintaining security observability through operations. Many different visualizations are available through various commercial tools but no comprehensive open source solution is available.
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: Incident Response and Management (ticketing etc.) - applies to attempted and successful intrusion, fraud, hacking, phishing incidents and all other forms of security incident Comments: The project MUST provide Incident Response and Management (ticketing etc.) - This applies to both attempted and successful intrusion, fraud, hacking, phishing incidents and all other forms of security incident. This applies across all building blocks and must be built-in to the infrastructure and processes of each building block when any incident is detected.
Organizational Risk Rating: Low Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Low Building Block Mappings: Security Description: This would involve creating a sandbox environment to test and resolve security issues thus requiring a complete sandbox of all the security tools mentioned here. Comments: The project MUST provide a Security Sandbox Solution - used to test responses to potential/predicted and actual security incidents.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: Low-Medium Building Block Mappings: Security Description: This is more of a planning and execution exercise and simply must be built in to the overall deployment game plan for every single piece of software infrastructure and data infrastructure, along with the processes and procedures, as well as test recoveries to ensure that an adequate recovery response can be attained on demand. Comments: The project MUST deal with Critical digital infrastructure business continuity considerations (terrorism, sabotage, information warfare, natural disasters etc.) - i.e. provide the technical ability and processes required in order to recover the complete digital infrastructure. This applies to all building blocks and must also undergo recovery testing on a regular basis.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: This is similar to, or the same as, the concern above and simply elaborates on it further. Comments: The project MUST deal with Specifically Security Related Concerns surrounding BIA/DRP/BCP (disaster recovery, business continuity etc.) - what this means is how to recover to specific data versions using logging, tracking and tracing information to determine the best recovery path. This also covers the security of the backups themselves, to prevent fraud, tampering and information leakage during storage or recovery for example, and MUST address the exact data security requirements stipulated throughout this document but in the context of backups.
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Low-Medium Building Block Mappings: Security Description: This is a more advanced and aggregated way of determining the overall security posture in respect to public cloud based deployments and takes into account all of the risks. There are a number of open source solutions available such as OpenCSPM (cloud security posture management). Really only applicable with deployments on public clouds but becoming essential. Comments: The project SHOULD implement Cloud Security Posture Management (automation of identification and remediation of risks with public cloud services). Open source tools such as OpenCSPM are available.
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security/Registry Description: It seems the concept of registry is morphing into something more generic than it was originally (which seemed to be a service registry). The state recovery for this is contingent on the technology solution that the Registration BB takes. Losing the state of this registry due to a security issue is a key risk that MUST be mitigated. Comments: The project MUST provide the ability for specific Digital Service Registry State Recovery (point-in-time). Registries are one of the most likely early targets of cyber-criminals. This applies predominantly to the Registry building block.
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: This is all about how information is controlled throughout systems based on its secrecy/privacy classification.
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: See above - requires a physically separate domain for hosting such information. Comments: This is related to the above, but the project SHOULD provide Controlled Unclassified Information (CUI) domain isolation (isolation for sub-networks and security domains etc. handling CUI).
Organizational Risk Rating: Medium Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: By virtue of the fact that each BB will host its own UI, there is strong potential for cross-site scripting vulnerabilities. Rules must be adhered to by developers. For example, all DOM-based XSS reflection or embedding must be handled on the server side, not in the ECMAScript layer. Several other rules must also be implemented for developers to reduce the likelihood of XSS vulnerabilities. Many of these rules can be found on the OWASP web site. Comments: The project MUST provide a consistent set of rules for cross-site scripting across all building blocks to ensure that it is not exposed to Cross-site Scripting (XSS) Attacks. This is a complex area that must be addressed during development and testing. Details of many of the rules that must be implemented can be found on the OWASP web site.
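The central server-side rule from the OWASP XSS guidance referenced above is output encoding: user-supplied text is escaped before it is embedded in HTML, never reflected verbatim. A minimal Python sketch (the function name and markup are illustrative):

```python
import html

def render_comment(raw: str) -> str:
    # Server-side output encoding: html.escape converts <, >, &, and
    # quotes into entities, so injected markup renders as inert text
    # instead of executing in the browser.
    return f"<p>{html.escape(raw)}</p>"
```

An attempted payload such as `<script>alert(1)</script>` is thus delivered to the browser as literal text rather than a script element.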
Organizational Risk Rating: High Target Deployment Phase: First Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security/Cloud Infrastructure and Hosting Description: Mobile and wireless networking security is a significant issue to deal with, given that there will likely be mobile applications deployed as a part of the architecture. A plethora of issues abound, including for example Evil Twin Attack, WarDriving (piggyback attack), sniffing and shoulder-surfing etc. CISA has released a short guide for securing wireless networks in general. Comments: The project MUST provide protection against Mobile and Wireless Networking Security/WIPS vulnerabilities. There are many potential and known vulnerabilities around these networks, which have plagued information security in the digital age. The potential for eavesdropping and information leakage, as well as other forms of hijacking attacks, is very broad because the exposure is over-the-air.
Organizational Risk Rating: Low Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: High Building Block Mappings: Security Description: Open Source Intelligence (OSINT) gathering is complex. The following article describes the origins and tools of OSINT that have evolved from the original Metasploit (white-hat hacking framework):
Organizational Risk Rating: High Target Deployment Phase: Second Feasibility for Limited/Low Resource Settings: Medium Building Block Mappings: Security Description: Fraud detection and management is the set of activities carried out to prevent money or property from being acquired through false pretenses, together with the ability to trace and manage incidences of fraud. Fraud detection is applied in many industries, such as banking and insurance. Since this project involves the exchange of money, there is a case for fraud detection and management. A number of open source tools are available, including FraudLabs Pro, Fraud.Net, MISP, pipl, and Sift. See this article for a deeper description: Comments: The project MUST provide tooling for fraud detection and management. This is not negotiable, and many open source tool sets currently exist, such as FraudLabs Pro, Fraud.Net, and MISP. This applies to all building blocks dealing with financial transactions and information (such as payments, which appear to be bidirectional, i.e. govt-to-recipient and payee-to-govt).
We have made an assumption that GovStack will only ever have to deal with what is known as CUI (Controlled Unclassified Information), as opposed to CI (Classified Information, which is the type of information managed by security agencies such as the CIA).
CUI MUST be managed in accordance with NIST SP 800-171 Rev. 2. Comments: This is a bit more general, but the project SHOULD provide the ability to manage and recover Controlled Unclassified Information (CUI) registries, repositories, and processes (i.e. marking, safeguarding, transporting, disseminating, reusing, and disposing of controlled unclassified information). This is to be in accordance with NIST SP 800-171 Rev. 2 (see References and Standards). This applies to all building blocks that deal with CUI (usually information collected by government and security agencies), which is likely to also be specific to country implementation.
The following standards are applicable to all aspects of the security building block and are cross-cutting across other building blocks. Note that these are not technical standards but the process framework standards that shall be used to guide security decisions on the project:
All of the implementation processes and guidance MUST follow the NIST CyberSecurity Framework (see Ref 2)
All of the security issues and concerns to be addressed are related by number to the core requirements above. The detailed definitions can be found in the document entitled Digital Platform Security for GIZ, ITU DIAL GovStack (see Ref 5)
It is assumed that the maximum level of information security required is what is known as CUI (Controlled Unclassified Information). Processes dealing with CUI must conform with NIST SP 800-171 Rev. 2 (see Ref 3).
The Security Requirements document provides cross-cutting guidance for any GovStack implementation, whether an individual Building Block or a full GovStack solution to address one or more use cases. It provides a reference for security concerns and requirements for how to implement and deploy secure solutions.
This document also describes a set of 'Authorization Services' that should be implemented for any GovStack implementation. The authorization services provide secure communication between building blocks as well as a mechanism for user authentication and definition of roles and permissions for users.
Security requirements address all cross-cutting security issues and concerns for the whole GovStack digital platform, including every layer, every building block, and all applications. Although other building blocks address some security aspects (for example, the Identity Building Block addresses foundational identity, document workflows, etc.), the resultant solutions delivered by all building blocks (including the Identity Building Block) MUST comply with the standards and requirements set by this security requirements document. This document covers security requirements of two types:
Build-time Security: These are considerations for embedding security during development of building blocks and applications.
Deployment time Security: These are considerations for enforcing security measures in deployed systems during run-time.
These may consist of cross-cutting functionalities that can be utilized by various building blocks, as well as specific requirements for the Security Building Block itself, to provide secure internet access for user interaction with applications and building blocks in GovStack.
The security requirements are based on the NIST CyberSecurity Framework and are defined herein through review of GovStack use cases and best practices for securing and hardening government infrastructure. Note also that the security building block defines the core requirements to implement policy-based API security and management across the internal building blocks as well as the consumption of external applications and third-party services. This is based on the architectural assumption that all inter-building-block communication and integration with external applications and users MUST be through REST APIs.
Though these security requirements are cross-cutting, this document also provides guidance on how to implement core 'Authorization Services' within a GovStack implementation. These services provide the mechanism for user authentication, tracking the specific permissions and roles that a user has and managing access to the various Building Blocks that are consumed by the application. The functions of the Authorization Services include the following:
User authentication
Management of access to Building Block APIs
API Gateway functionality which will manage incoming requests
Identity and Access Management and/or Role-Based Access Control.
These modules are described in Sections 7 and 8 of this document (Authorization Services and Additional Security Modules)
This section links to any external documents that may be relevant, such as standards documents or other descriptions of this Building Block that may be useful.
A historical log of key decisions regarding this Building Block.
A list of topics that may be relevant to future versions of this Specification
A list of links and resources relevant to the Security Specification
In order to enable the delivery of GovStack use cases, a mechanism must be defined that provides the appropriate level of access to the various building blocks by different users and organizations. This includes defining information sharing across service or organizational boundaries, enforcing appropriate roles and permissions for users, allowing user session information to be passed between building blocks, and enforcing secure access between Building Blocks (either co-located within a service, or across applications/organizations using an Information Mediator).
Note: Additional technical detail and example scenarios can be found in this Authentication and Cross-BB Authorization document
Authentication and IAM/Roles/Permissions should be managed by an Authorization Service (functionality extracted from current Security spec) using well-defined standards like OpenID Connect (OIDC).
This service will handle login, defining roles and permissions, and returning a set of roles/permissions for a user session
This service may be implemented as a standalone Building Block or as part of the Application that is controlling the user flow.
After user login and retrieval of roles/permissions for that user
The Application is responsible for managing any outbound calls based on the roles/permissions for that user - whether to internal BBs or cross-organizational requests through IM.
The Application will manage the API keys/credentials needed to access any local BB APIs that it calls
We will not pass user/session information when making API calls to local BBs
We will not pass JWT tokens or roles when making API calls through Information Mediator
JWT/tokens need to be exchanged between different UIs/applications - If we need to pass control over to another app to fill in some information (giving consent, etc) - use OIDC
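To make the token handling above concrete, the sketch below (Python; illustrative only) decodes the claims segment of a JWT so an application can read roles and permissions from it. Note the loud caveat in the code: it deliberately skips signature verification, which a real OIDC integration MUST perform with a vetted library before trusting any claim.

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims (payload) segment of a JWT.

    Illustration only: this does NOT verify the signature. A real
    implementation must validate the signature, issuer, audience, and
    expiry with a vetted OIDC/JWT library before trusting any claim.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy (unsigned) token just to demonstrate the payload layout.
def _b64url(data: dict) -> str:
    raw = json.dumps(data).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

toy_token = ".".join([_b64url({"alg": "none"}),
                      _b64url({"sub": "user-123", "roles": ["registrar"]}),
                      ""])
print(decode_jwt_claims(toy_token))  # {'sub': 'user-123', 'roles': ['registrar']}
```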
The Information Mediator is used to facilitate communication between different organizations. The authorization and level of access is defined at the organization level, not the user level (i.e. this particular organization has permission to access this set of data)
The security server will hold/manage credentials and API keys that are needed for accessing remote services
An organization can also allow access to an individual user based on a foundational ID
IM is needed when going outside of a local network (i.e. across the internet)
Set up an account/set of credentials in the other DPG/application that allows access to the APIs
The authorization services (either within the application or separate functionality) will hold the login credentials in some type of secret storage (.env file or similar mechanism)
When calling an API in the DPG, the Application should first call the DPG login API (there must be an API exposed for login) and then hold the token that is needed for API calls
The Application will pass the appropriate token on subsequent API calls to the DPG
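The login-then-call pattern described above can be sketched as follows. Everything here is a hypothetical stand-in (`DpgClient`, the login behavior, the credential values) for whatever HTTP client and secret storage the application actually uses:

```python
from typing import Callable, Dict, Optional

class DpgClient:
    """Sketch of: call the DPG login API once, hold the token,
    and attach it to subsequent API calls."""

    def __init__(self, login_fn: Callable[[str, str], str],
                 username: str, password: str):
        self._login_fn = login_fn
        self._username = username      # loaded from secret storage in practice
        self._password = password
        self._token: Optional[str] = None

    def _ensure_token(self) -> str:
        if self._token is None:        # first call: hit the DPG login API
            self._token = self._login_fn(self._username, self._password)
        return self._token

    def auth_headers(self) -> Dict[str, str]:
        """Headers to attach to every subsequent DPG API call."""
        return {"Authorization": f"Bearer {self._ensure_token()}"}

# Usage with a stand-in login function (no network call):
fake_login = lambda user, pw: "token-abc123"
client = DpgClient(fake_login, "svc-account", "s3cret")
print(client.auth_headers())  # {'Authorization': 'Bearer token-abc123'}
```

The design point is that the token lives inside the client object, not in application business logic, so rotation or re-login can happen in one place.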
Within a GovStack implementation, an Identity and Access Management (IAM) or Role-Based Access Control (RBAC) system must be in place. This will define the level of access and permissions that a particular user (or group of users) will have within the GovStack system. The requirements for IAM/RBAC are described in Section 8.2 of this security specification.
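As a minimal illustration of RBAC, the following sketch maps roles to permission sets and grants access if any of a user's roles carries the requested permission. The role and permission names are invented for the example; a real system would load them from the IAM/RBAC store:

```python
# Minimal role-based access control sketch (illustrative names only).
ROLE_PERMISSIONS = {
    "registrar": {"applications:read", "applications:write"},
    "auditor":   {"applications:read", "audit-log:read"},
}

def is_allowed(user_roles, permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_allowed(["auditor"], "applications:write"))               # False
print(is_allowed(["auditor", "registrar"], "applications:write"))  # True
```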
Service: An API or endpoint provided by a Building Block or internal microservice of the Application
Application: One or more Building Blocks that may be combined with a UI and/or internal services which implement some business logic to provide a set of related services.
Use Case/User Journey: A system involving one or more UX components that allows a user to access multiple services which may be co-located or require the use of an information mediator to access a set of services/applications.
Organization: An entity that maintains one or more applications or services that may be consumed by other organizations
User: An individual that is accessing a particular application or set of services
Information Mediator: Connects applications across the internet, allowing services that are owned by different organizations to share data securely and using agreed-upon rules
Foundational Identity: a unique identifier for an entity that corresponds with a national identification system
Functional Identity: a login or user account that is associated with a particular application or set of services
Session: a set of information (or token) that provides context to a particular user interaction with an application (i.e. keeping track of a logged-in user and what they have permission to do while they are logged in for a particular period of time)
| Term or Acronym | Meaning and Expansion | Comments and Links |
| --- | --- | --- |
| Access | A general term that describes the granting and restriction of access to resources for subjects. | |
| Authentication | The validation of user credentials for the purpose of system login and basic access. | Authentication is the process of recognizing a user's identity. |
| Authorization | The granting of privileges or rights for accessing the various resources hosted by a system to a subject, for example via a role or group. | Authorization is the process of giving someone permission to do or have something. |
| CIS | The Center for Internet Security (CIS) benchmarks are a set of best-practice cybersecurity standards for a range of IT systems and products. CIS Benchmarks provide the baseline configurations to ensure compliance with industry-agreed cybersecurity standards. | CIS is an independent nonprofit organization with a mission to create confidence in the connected world. |
| CSPM | Cloud Security Posture Management: a solution suite that enables administrators to keep track of the way in which both home-grown and third-party services and applications access public cloud provider resources from a security perspective, and enables vulnerabilities to be resolved. | CSPM is a market segment for IT security tools that are designed to identify misconfiguration issues and compliance risks in the cloud. |
| CUI | Controlled Unclassified Information, as defined by NIST SP 800-171 Rev. 2. | Controlled Unclassified Information (CUI) is information that requires safeguarding or dissemination controls consistent with applicable laws, regulations, and government-wide policies. |
| CVE | Common Vulnerabilities and Exposures: a known vulnerability in a system or network component which can be exploited by a malicious attacker to gain access or create havoc. | CVE, short for Common Vulnerabilities and Exposures, is a list of publicly disclosed computer security flaws. |
| DevOps and DevSecOps | A set of principles, practices, and tools that fully integrates and expedites the process of building, securing, and deploying code on a scheduled and/or on-demand basis, with the goals of reduced errors, reduced time-to-market, increased security, and increased accuracy, among others. | DevOps focuses on collaboration between application teams throughout the app development and deployment process. DevSecOps evolved from DevOps as development teams began to realize that the DevOps model didn't adequately address security concerns. |
| DLP | Data Leakage Prevention: a solution typically used to prevent confidential or private information from leaking outside the organization to unauthorized third parties. | Data loss prevention (DLP) is a set of tools and processes used to ensure that sensitive data is not lost, misused, or accessed by unauthorized users. |
| Federation | Federated security allows for clean separation between the service a client is accessing and the associated authentication and authorization procedures. Federated security also enables collaboration across multiple systems, networks, and organizations in different trust realms. | Federated identity is a method of linking a user's identity across multiple separate identity management systems. |
| GLBA | The Gramm-Leach-Bliley Act (GLB Act or GLBA), also known as the Financial Modernization Act of 1999, is a United States federal law that requires financial institutions to explain how they share and protect their customers' private information. It is also a generally accepted global standard. | The Gramm-Leach-Bliley Act requires financial institutions (companies that offer consumers financial products or services like loans, financial or investment advice, or insurance) to explain their information-sharing practices to their customers and to safeguard sensitive data. |
| HIPAA | An established United States federal standard to protect individuals' medical records and other personal health information; it applies to health plans, health care clearinghouses, and those health care providers that conduct certain health care transactions electronically. It is a generally accepted standard globally. | The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a federal law that requires the creation of national standards to protect sensitive patient health information from being disclosed without the patient's consent or knowledge. |
| IAM | Identity and Access Management: typically refers to a security suite that implements the infrastructure required for authentication and authorization, plus the management of identities, roles, groups, and access. | Identity and access management (IAM) is the discipline that enables the right individuals to access the right resources at the right times for the right reasons. IAM addresses the mission-critical need to ensure appropriate access to resources across increasingly heterogeneous technology environments and to meet increasingly rigorous compliance requirements. |
| IMAP | Internet Message Access Protocol: a mail client protocol used for retrieval of email messages from a mail server. For the purposes of this document, IMAP refers to IMAP4, which is defined by the IETF in multiple RFCs. | Internet Message Access Protocol (IMAP) is a protocol for accessing email or bulletin board messages from a (possibly shared) mail server or service. |
| OAuth2 | An open-standards protocol for delegated authorization that uses bearer tokens and is specifically designed to work over HTTP. OAuth provides clients "secure delegated access" to server resources on behalf of a resource owner: it specifies a process for resource owners to authorize third-party access to their server resources without providing credentials. OAuth2 is the second major release of OAuth, which has been hardened against known attacks such as the "AS Mix-Up". Not all implementations of OAuth2 are equal, and some have been found to have security flaws. | The OAuth (open authorization) protocol was developed by the Internet Engineering Task Force and enables secure delegated access. |
| OpenID Connect | A simple, open-standards-based identity layer on top of the OAuth 2.0 protocol. It allows clients to verify the identity of a party based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the party in an interoperable and REST-like manner. | OpenID Connect lets developers authenticate their users across websites and apps without having to own and manage password files. |
| OWASP | The Open Web Application Security Project is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the field of web application security. | The Open Web Application Security Project, or OWASP, is an international non-profit organization dedicated to web application security. |
| PaaS | Platform as a Service: a suite of software components that is fully integrated to provide a secure, convenient, and rapid application development and deployment platform for cloud-style applications. | PaaS (Platform as a Service), as the name suggests, provides computing platforms which typically include an operating system, programming language execution environment, database, and web server. |
| PCI DSS | A set of standards used by the payment card industry to secure payment card data and cardholder information, including primary account numbers (PAN), credit/debit card numbers, and sensitive authentication data (SAD) such as CVVs and PINs. | The Payment Card Industry Data Security Standard (PCI DSS) is required by contract for those handling cardholder data, whether a start-up or a global enterprise. |
| POP | Post Office Protocol: a standard email protocol used by clients to access email once delivered to a mail server in a specific DNS domain. Various versions of this protocol exist, but for the purposes of this document POP refers to POP3 as defined by RFC 1939, with the extension mechanism in RFC 2449 and an authentication mechanism defined in RFC 1734. | The Post Office Protocol (POP) is the most commonly used message request protocol in the Internet world for transferring messages from an email server to an email client. |
| Provisioning | A way of propagating the joining or leaving of users from the system and creating/removing the accounts and access rights for users based on their target profile/role. | In general, provisioning means "providing" or making something available. In a storage area network (SAN), storage provisioning is the process of assigning storage to optimize performance. In telecommunications terminology, provisioning means providing a product or service, such as wiring or bandwidth. |
| Realm | A realm is a security policy domain defined for a web or application server. A realm contains a collection of users, who may or may not be assigned to a group. An application will often prompt for a username and password before allowing access to a protected resource. Access for realms can be federated. | A realm is a security policy domain defined for a web or application server. The protected resources on a server can be partitioned into a set of protection spaces, each with its own authentication scheme and/or authorization database containing a collection of users and groups. |
| SAML | Security Assertion Markup Language. SAML and SAML2 are XML markup protocols (a suite of XML Schema message types) designed for federation of identities across identity providers and service providers. Its main use case is web single sign-on. | Security Assertion Markup Language (SAML) is an open standard for sharing security information about identity, authentication, and authorization across systems. |
| SCEP | Simple Certificate Enrollment Protocol: used to enroll users and issue digital certificates. Typically supported by the certificate authority server. | Simple Certificate Enrollment Protocol (SCEP) is an open-source protocol that is widely used to make digital certificate issuance at large organizations easier, more secure, and scalable. |
| Single Sign-On (SSO) | A way of ensuring that users only need to enter credentials once in order to gain policy-based access to resources across security realms. | Single sign-on (SSO) is an authentication method that enables users to securely authenticate with multiple applications and websites by using just one set of credentials. |
| SMTP | Simple Mail Transfer Protocol: a protocol used to route email between gateways to the server responsible for final delivery to a specific DNS mail domain. | The Simple Mail Transfer Protocol (SMTP) is used to deliver email messages over the Internet. This protocol is used by most email clients to deliver messages to the server, and is also used by servers to forward messages to their final destination. |
| Subject | In a security context, a subject is any entity that requests access to an object. These are generic terms used to denote the thing requesting access and the thing the request is made against. When you log onto an application, you are the subject and the application is the object. | The term "subject" represents the source of a request. A subject may be any entity, such as a person or service; in Java, for example, a subject is represented by the javax.security.auth.Subject class. |
| XACML | eXtensible Access Control Markup Language. The XACML standard defines a declarative, fine-grained, attribute-based access control policy language, an architecture, and a processing model describing how to evaluate access requests according to the rules defined in policies, all in XML Schema. | XACML (eXtensible Access Control Markup Language) is an open standard XML-based language used to express security policies and access rights to information. |
Note that all of the requirements stipulated in this document and its references are reciprocal in that they also apply to components such as the API Management and Gateway services implemented by the security building block. For example the API Management and Gateway services deployed by this building block MUST also address their own intrusion prevention and detection needs referencing the solution requirements defined by this document.
The requirements stipulated in this document are themselves cross-cutting in that they apply to all building blocks and MUST be cross-referenced by the Building Block Definitions for each building block in the Cross-cutting requirements section.
Having these cross-cutting requirements defined centrally in this document and its references removes the issues of inconsistent, insufficient, costly and repetitive security implementation across all building blocks.
The cross-cutting requirements described in this document, its references and this section are an extension of the high level cross-cutting requirements defined in the architecture specification document and intended to specifically define the security requirements for the whole GovStack architecture in all layers.
This section describes the additional cross-cutting requirements that apply to the security building block as well as cross-cutting security requirements for ALL other building blocks. Note that cross-cutting requirements defined here use the same language (MUST or SHOULD) as specified in the architecture blueprint document (see Ref 1).
Personal data MUST be kept private and never shared with any parties, except where specific authorization has been granted. This also applies to all acquired security components as they will often be logging personal data along with the transactional records. That logging data must also be considered private. Where CUI (Controlled Unclassified Information) is dealt with, the NIST 800-171 Rev 2 standard shall be applied (see Ref 3)
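As a sketch of treating log data as private, the snippet below masks personal-data fields before a record is written to a log. The field list is hypothetical; a real deployment would derive it from its data-classification policy for personal data and CUI:

```python
# Hypothetical field names; a real deployment would derive this list
# from its data-classification policy.
SENSITIVE_KEYS = {"name", "national_id", "phone", "email"}

def redact(record: dict) -> dict:
    """Return a copy of a log record with personal-data fields masked."""
    return {k: ("***REDACTED***" if k in SENSITIVE_KEYS else v)
            for k, v in record.items()}

event = {"action": "payment.sent", "national_id": "1985-1234-567", "amount": 120}
print(redact(event))  # national_id is masked, other fields pass through
```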
Must refer reciprocally to this document and its references.
A security requirement is a condition on the environment that we wish to make true by installing the system, in order to mitigate risks: a requirement defining what level of security is expected from the system with respect to some type of threat or malicious attack.
5.10 Virus, Ransomware, Malware, Spam, Phishing Protection Requirements
See the section of this document dealing with OSINT tools
The resource model shows the relationship between data objects that are used by this Building Block. The following resource model depicts the basic elements of identity and access management (IAM) solutions required, organized into domains:
The data elements provide detail for the resource model defined above. This section will list the core/required fields for each resource. Note that the data elements can be extended for a particular use case, but they must always contain at least the fields defined here. Information about data elements will include:
Name
Description
Data Type
Required/Optional flag
Link to applicable standard(s)
Notes
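The field list above can be represented as a simple record type. The sketch below is illustrative only; the attribute names are not mandated by the specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataElement:
    """One entry in a data-element listing, mirroring the fields above."""
    name: str
    description: str
    data_type: str
    required: bool
    standard_link: Optional[str] = None  # link to applicable standard(s)
    notes: Optional[str] = None

# Illustrative instance:
elem = DataElement(name="userId", description="Unique user identifier",
                   data_type="string", required=True)
print(elem.name, elem.required)  # userId True
```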
The following is a minimal example of how OpenIAM implements REST based authentication using its REST API:
(Note: The APIs will need to include appropriate request and response version numbers, for example, see https://docs.google.com/document/d/12b696fHlOAAHygFF5-XxUJkFyFjMIV99VDKZTXnnAkg/edit#heading=h.h9ypjkyetr1i)
URL
/idp/rest/api/v1/auth/public/login
Method
POST
Request Parameters
login: user login (optional)
password: user password (optional)
postbackURL: redirectURL after success login (optional)
Headers
Content-Type:application/x-www-form-urlencoded
cURL Example
curl 'http://127.0.0.1:8080/idp/rest/api/v1/auth/public/login' -X POST --data 'login=admin&password=pass123456'
Success Response Example
Error Response Example
The following is a minimal example of how OpenIAM implements authentication with OAuth2 by requesting an OAuth2 token:
URL
/idp/oauth2/token
Method
POST
Request Parameters
client_secret: Value of the client secret from OAuth client configuration page
client_id: Value of the client ID from OAuth client configuration page
grant_type: Type of grant flow
username: Login of requester
password: Password of requester
Headers
Content-Type:application/x-www-form-urlencoded
cURL Example
Error Response Example
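The token request above is a form-encoded POST. The following sketch shows how the request body might be assembled before sending; the client and user credentials are placeholder values, not real configuration:

```python
from urllib.parse import urlencode

# Placeholder values -- client_id/client_secret come from the OAuth
# client configuration page, as described above.
form = {
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
    "grant_type": "password",
    "username": "admin",
    "password": "pass123456",
}
# Sent with header Content-Type: application/x-www-form-urlencoded
body = urlencode(form)
print(body)
```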
The following is a minimal example of how OpenIAM implements authentication token renewal:
URL
/idp/rest/api/auth/renewToken
Method
GET
Headers
Authorization: with oAuth token in format 'Bearer: <token>'
Cookie: with current valid authentication token in format 'OPENIAM_AUTH_TOKEN=<token>'
cURL Example
Success Response Example
Error Response Example
The following is a minimal example of how OpenIAM implements authorization using OAuth2:
URL
{server_url}/idp/oauth2/authorize
Replace {server_url} with the name of the server.
Method
GET
Parameters
response_type: code
client_id: webconsole/Access Control/Authentication Providers/*needed provider* edit/ Client ID field
redirect_uri: webconsole/Access Control/Authentication Providers/*needed provider*/Redirect URL field (use 'Space' or 'Enter' to separate multiple values)
cURL Example
curl 'http://dev1.openiamdemo.com:8080/idp/oauth2/authorize?response_type=code&client_id=EF4128DCC0D24ED3BAC17FC918FDDBF5&redirect_uri=http://dev1.openiamdemo.com:8080/oauthhandler'
Success Response Example
redirect to redirect_uri?code=code
Error Response Example
redirect to redirect_uri with error
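The authorization-code round trip above can be sketched in two steps: build the authorize URL, then extract the `code` parameter from the redirect. The host, client_id, and redirect_uri values below are placeholders:

```python
from urllib.parse import urlsplit, parse_qs, urlencode

# Step 1: build the authorization URL (placeholder values).
params = {"response_type": "code",
          "client_id": "EF4128DCC0D24ED3BAC17FC918FDDBF5",
          "redirect_uri": "http://localhost:8080/oauthhandler"}
authorize_url = "http://localhost:8080/idp/oauth2/authorize?" + urlencode(params)

# Step 2: on success the server redirects to redirect_uri?code=<code>;
# the application extracts the code to exchange it for a token.
redirect = "http://localhost:8080/oauthhandler?code=abc123"
code = parse_qs(urlsplit(redirect).query)["code"][0]
print(code)  # abc123
```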
The following is a minimal example of how OpenIAM implements an implicit grant flow API style of authentication:
URL
{server_url}/idp/oauth2/authorize
Replace {server_url} with the name of the server.
Method
GET
Parameters
response_type: token
client_id: webconsole/Access Control/Authentication Providers/*needed provider* edit/Client ID field
redirect_uri: webconsole/Access Control/Authentication Providers/*needed provider*/Redirect URL field (use 'Space' or 'Enter' to separate multiple values)
cURL Example
curl -v -XGET 'http://dev1.openiamdemo.com:8080/idp/oauth2/authorize?response_type=token&client_id=EF4128DCC0D24ED3BAC17FC918FDDBF5&redirect_uri=http://dev1.openiamdemo.com:8080/oauthhandler'
or, just:
curl 'http://dev1.openiamdemo.com:8080/idp/oauth2/authorize?response_type=token&client_id=EF4128DCC0D24ED3BAC17FC918FDDBF5&redirect_uri=http://dev1.openiamdemo.com:8080/oauthhandler'
Success Response Example
redirect to redirect_uri#access_token=Pcej-9OdU_wshAjTn76MP-Cj5OgY_sfdYrt&expires_in=60000&token_type=Bearer
Error Response Example
redirect to redirect_uri with error
The following is a minimal example of how OpenIAM implements a get operation for token information:
URL
{server_url}/idp/oauth2/token/info
Replace {server_url} with the name of the server.
Method
GET
Parameters
token: a token previously created via the create-token request.
cURL Example
curl -v -XGET 'http://dev1.openiamdemo.com:8080/idp/oauth2/token/info?token=rdSOyor6hqJ2CrQ5QrpeXgX.ItgVEx1.nskN'
or, just:
curl 'http://dev1.openiamdemo.com:8080/idp/oauth2/token/info?token=rdSOyor6hqJ2CrQ5QrpeXgX.ItgVEx1.nskN'
Success Response Example
Error response example
The following is a minimal example of how OpenIAM would create an OAuth2 token using the authorization code grant flow:
URL
{server_url}/idp/oauth2/token
Replace {server_url} with the name of the server.
Method
POST
Parameters
client_secret: webconsole/Access Control/Authentication Providers/*needed provider* edit/Client Secret field
client_id: webconsole/Access Control/Authentication Providers/*needed provider* edit/Client ID field
grant_type: authorization_code
redirect_uri: webconsole/Access Control/Authentication Providers/*needed provider*/Redirect URL field (use 'Space' or 'Enter' to separate multiple values)
code: a code generated by the authorization code grant flow request.
Headers
Content-Type: application/x-www-form-urlencoded
cURL Example
curl -v -XPOST --data 'client_secret=client_secret&client_id=client_id&grant_type=authorization_code&redirect_uri=redirect_uri&code=code' 'http://dev1.openiamdemo.com:8080/idp/oauth2/token'
Success Response Example
Error Response Example
The following is a minimal example of how OpenIAM implements OAuth2 token revocation:
URL
{server_url}/idp/oauth2/token/revoke
Replace {server_url} with the name of the server.
Method
POST
Headers
Content-Type=application/x-www-form-urlencoded
Parameters
token: a token previously created via the create-token request.
cURL Example
curl -v -XPOST --data 'token=token' 'http://dev1.openiamdemo.com:8080/idp/oauth2/token/revoke'
Success Response Example
Error Response Example
The following is a minimal example of how OpenIAM implements OAuth2 token validation:
URL
{server_url}/idp/oauth2/token/validate
Replace {server_url} with the name of the server.
Method
GET
Parameters
token: a token previously created via the create-token request.
cURL Example
curl -v -XGET 'http://dev1.openiamdemo.com:8080/idp/oauth2/token/validate?token=rdSOyor6hqJ2CrQ5QrpeXgX.ItgVEx1.nskN'
Success Response Example
Error response example
The following is a minimal example of how user information can be obtained from OpenIAM using an OAuth2 token:
URL
{server_url}/idp/oauth2/userinfo
Replace {server_url} with the name of the server.
Method
GET
Parameters
token: token should be created via Create token.
cURL Example
curl -v -XGET 'http://dev1.openiamdemo.com:8080/idp/oauth2/userinfo?token=rdSOyor6hqJ2CrQ5QrpeXgX.ItgVEx1.nskN'
or, just:
curl 'http://dev1.openiamdemo.com:8080/idp/oauth2/userinfo?token=rdSOyor6hqJ2CrQ5QrpeXgX.ItgVEx1.nskN'
Success Response Example
Error Response Example
The most comprehensive API available for this is delivered by OpenIAM. Unfortunately, this API is currently delivered in SOAP. The purpose of this API is to provide 3rd parties with the ability to create resources, roles and access within the IAM system. There are multiple options for getting this done, including batch upload and configuration using the administrative user interface. This would need to be addressed at implementation time using the most practical means. There does not seem to be a current use case for the Building Blocks to create these types of resources on the fly using the IAM API. The API definitions can be found here: https://docs.openiam.com/docs-5.1.14/html/docs.htm#API/SOAP/SOAP.htm%3FTocPath%3DAPI%2520Guide%7CPart%2520II%253A%2520SOAP%2520API%2520integration%2520services%7C_____0
Developed by Laurence Berry, Betty Mwema, and Dr. P. S. Ramkumar
Design standards, guidance, and patterns for designing services using GovStack Building Blocks.
This document has been developed as guidance to kick-start the design and development of services that use and combine GovStack applications and Building Blocks, as well as other components while maintaining a seamless and consistent user experience.
This guidance supports teams in identifying and implementing the foundations for designing user-centered, accessible, consistent, and technically robust services. Intended to help teams align to the and the .
Specifications for how to implement accessible, responsive, multi-modal Building Blocks and provide a consistent service.
Guidelines for designing interfaces (like meeting ).
Screen flows for common user journeys (like registration).
Guidance on technical choices (like how to design for low bandwidth, high latency environments, unreliable connectivity, local storage, local persistence of data security using DOMs, etc.).
Patterns for managing client-side validation.
The guidelines act as a template checklist for assuring the quality of a service's design and delivery. Each point in the guideline includes, or links to, additional guidance.
We chose to define high-level service patterns rather than anything more specific like a design system or user interface components, this is to maintain flexibility to work around each organization's needs and existing design assets and front-end frameworks.
This section serves as a checklist for assessing the quality of a service by following the points in this guide.
The version history table describes the major changes to the specifications between published versions.
Version | Authors | Comment
5.5.1 Enrollment Services
The solution MUST provide enrollment services for a digital ID in the form of a certificate, using the physical credentials of the enrollee (a human citizen subject) and the processes of the Identity BB (see the functional requirements for Identity in the Identity BB Definition). A feature for invalidating, locking, or disenrolling/revoking the digital ID shall also be provided, both for human citizen subjects leaving the system and as a response measure to security breaches. Digital certificate enrollment must be provided by the solution but is not required for every human citizen subject (see below).
Note that it is anticipated that the Identity BB will call this feature either directly via API or indirectly via the IAM features of the Security BB for users electing to use a digital ID consisting of certificates as a part of the account provisioning process. The digital ID will then be stored with the physical ID records in the identity BB and sent to the new user via secure means (probably installed on their device).
Note that simple numerical digital IDs will also be supported for human citizen subjects as an option where users are unable to leverage certificates based digital ID. The requirements governing this are to be stipulated by the Identity BB (see the Identity BB Definition) .
Note that 3rd party organization and internal subjects (both human and non-human) MUST be issued valid signed digital certificates in order to establish and maintain secure inter-organization and internal communications.
REQUIRED
5.5.2 Multi-Factor Authentication
The overall solution suite shall also be able to implement multi-factor authentication using simple numeric digital IDs for human citizen subjects, such as the user's tax file number or social security number.
A selection of various alternatives for digital ID is required in order to cater for more or less digitally-savvy citizens. Various token types are also required to be optimally supported such as HOTP and TOTP tokens, SMS, email, push notifications, SSH keys, X.509 certificates, Yubikeys, Nitrokeys, U2F and WebAuthn. Vendors of solutions SHOULD articulate the benefits of what they propose in their solution.
Note that multi-factor authentication must be able to be implemented for both external and internal subjects (people, systems, components etc.) but is not necessarily required for internal non-human subjects (such as building block components) as they communicate via the information mediator BB (see the InfoMed BB Definition).
REQUIRED
5.5.3 Numerical Digital ID Attribute
Where human citizen subjects adopt the use of a simple numerical digital ID, the multi-factor authentication process MUST include a time-sensitive credential (AKA OTP or one-time PIN).
REQUIRED
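The time-sensitive credential requirement above is typically met with the standard HOTP/TOTP algorithms (RFC 4226 and RFC 6238), which are also listed among the supported token types in section 5.5.2. The sketch below is a minimal stdlib implementation of those algorithms, checked against the published RFC test vectors; it is illustrative only and says nothing about how any particular vendor product implements OTP.

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, unix_time, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed on the current time step (30 s by default)."""
    return hotp(secret, unix_time // step, digits)

# Published RFC test vectors confirm the implementation
assert hotp(b"12345678901234567890", 0) == "755224"               # RFC 4226 Appendix D
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"  # RFC 6238 Appendix B
print("OTP self-test passed")
```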
5.6.1 Strong Authentication and Cryptography
The basic security service requirements of high confidentiality, high integrity, strong authentication, strong cryptography and absolute non-repudiation must be delivered by the solution. The vendor must articulate how these needs are met with the proposed solution suite to ensure that GovStack is able to deliver the same consistent security service experience.
REQUIRED
5.6.2 Standards Based Certificate Authority Server
A certificate authority (CA), or certifier with fully configured server infrastructure, is required to implement trusted administration tools that issue signed digital certificates to citizens and other 3rd parties and then maintain the lifecycle of those digital certificates.
Digital certificates MUST comply with the IETF, ISO/IEC/ITU-T X.509 Version 3 and PKIX (note that PKIX is the registration authority role, which allows administrators to delegate the certificate approval/denial process) standards as defined by RFC5280 and associated standards. Digital certificates are to be issued on behalf of the appropriate government authority where GovStack is to be deployed.
REQUIRED
5.6.3 Certificate Issuance
Issued signed digital certificates MUST verify the identity of an individual, a server, or an organization and allow them to use SSL to communicate and to use S/MIME to exchange mail as well as sign documents and transactions in a non-repudiable manner.
REQUIRED
5.6.4 Digital Signatures
Certificates issued by the authority MUST be stamped with the certifier's digital signature (i.e. signed), which assures the recipients of the certificate that the bearer of the certificate is the entity named in the certificate.
REQUIRED
5.6.5 Private Certifier Capability
The solution provided MUST be able to be set up as a certifier to avoid the expenses that a third-party certifier charges to issue and renew client and server certificates; in other words, the solution can operate without a 3rd party certifier. This makes it easier, cheaper and quicker to set up and deploy new certificates as needed and at scale. Certificate validation will not require access to a 3rd party certifier.
REQUIRED
5.6.6 Revocation Lists Support
The certificate server MUST be able to support certificate revocation lists (CRLs), which contain information about revoked or expired Internet certificates.
REQUIRED
5.6.7 Flexible Certificate Authority Hierarchy
The certificate authority server infrastructure MUST enable CA administrators to create a flexible private CA hierarchy, including root and subordinate CAs, with no need for external CAs.
Private CA hierarchies must be able to be built in a hybrid mode, combining online and on-premises CAs with cloud based CAs.
REQUIRED
5.6.8 Web Based Admin Interface
The certificate authority server infrastructure MUST provide a comprehensive web based administrator user interface so that all of the GovStack certificate issuance and revocation features and functions can be configured and managed from a single central window.
REQUIRED
5.6.9 Standards Based API Interface
The certificate authority server infrastructure must provide a secure API interface that supports calls for the issuance and revocation of certificates by other GovStack components such as the Identity Building Block. This API must comply with the same OpenAPI standards defined in the Architecture Blueprint (see Ref 1)
REQUIRED
5.6.10 High Availability
The certificate authority server and its infrastructure must be configurable for highly available implementation (see the non-functional definitions for high availability). For example, this means clustering and failover of certificate authority services and associated data sources to provide a 24x7x365 service with 99.99% availability (AKA "four nines").
REQUIRED
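To make the "four nines" target concrete, the availability percentage translates directly into an annual downtime budget. The following quick calculation (illustrative only) shows why 99.99% is a demanding target:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def max_downtime_minutes(availability_pct):
    """Annual downtime budget implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(round(max_downtime_minutes(99.99), 1))  # four nines: ~52.6 minutes/year
print(round(max_downtime_minutes(99.9), 1))   # three nines: ~525.6 minutes/year
```

In other words, a four-nines CA service can be unavailable for less than an hour per year in total, which is what drives the clustering and failover requirements above.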
5.6.11 Horizontal Scalability
The certificate authority infrastructure must be horizontally scalable on commodity hardware to ensure that the scalability needs of low resource countries with large populations deploying GovStack can be met without incurring significant or untenable costs.
REQUIRED
5.6.12 Advanced Encryption Methods Support
The solution provided MUST support advanced encryption techniques such as ECC (Elliptic Curve Cryptography) which gives certificates an additional security/performance advantage vs use of the traditional RSA cryptography system for example. Other techniques may be acceptable and the vendor must explain and justify why they are superior.
REQUIRED
5.6.13 Enrollment via API Interface (SCEP)
The solution MUST provide a means of enrollment via a standardized API interface which SHOULD be based on SCEP. A description of the OpenXPKI enrollment workflow and API can be found here: https://openxpki.readthedocs.io/en/latest/reference/configuration/workflows/enroll.html. This is an example only and can vary based on the proposed implementation.
REQUIRED
5.6.14 Security Provisions
The certificate authority deployment scheme must be exposed to the public internet and protected securely in accordance with all of the other security requirements and provisions described in this document. This allows 3rd parties to verify the authenticity of certificates issued by the governments deploying GovStack.
REQUIRED
5.7.1 Centralized Credential Store
The GovStack security solution requires a credential store as a centralized infrastructure for hosting the user account and credentials defined such that the IAM solution and other components such as the API Management and Gateway solutions can leverage them. This may end up being embedded in other solutions such as IAM or potentially implemented as a separate repository such as LDAP.
REQUIRED
5.7.2 Web Based Admin Interface
The solution provided as a credential store MUST have a comprehensive web based administrative interface that allows administrators to make any necessary configuration changes and modify credentials for subjects as needed.
REQUIRED
5.7.3 High Availability
The solution provided as a credential store and associated components providing access to the store MUST be highly available and utilize clustering technology in order to provide a minimum of 24x7x365 service with 99.99% availability (AKA 4 9’s).
REQUIRED
5.7.4 Standards Based REST API
The solution provided MUST include a standard API for storage and access to credentials such as the standard REST LDAP API provided by the open source 389 Directory Server here: https://directory.fedoraproject.org/docs/389ds/design/ldap-rest-api.html#ldap-rest-api . Note that this is purely an example and may vary based on the solution proposed.
REQUIRED
5.7.5 Standard Connectors and Adapters
The solution provided as a credential store must be fully integratable with the other security solution components through standards based protocols and out-of-the-box adapters for the specific product offered.
REQUIRED
5.8.1 OTP and Multifactor Capability
The solution must provide the ability to generate and utilize time sensitive credentials in various forms for the purposes of securing user authentication with multiple factors using non-PKI credentials (see the section on digital ID in this document).
REQUIRED
5.8.2 Multiple OTP Methods
Multiple methods SHOULD be provided for the implementation of time-sensitive OTP, potentially using push or device-level sources.
Note that vendors SHOULD articulate the benefits of their technology and approach to implementing time sensitive credentials and align their recommendations to the needs of resource limited settings.
OPTIONAL
5.8.3 High Availability
The solution provided as an OTP server and any associated components MUST be highly available and utilize clustering technology in order to provide a minimum of 24x7x365 service with 99.99% availability (AKA 4 9’s).
REQUIRED
5.8.5 Standards Based REST API
The offered solution MUST provide a REST API for managing OTP similar to the following: https://www.miniorange.com/step-by-step-guide-to-set-up-otp-verification. This is just an example and can vary with the proposed implementation.
REQUIRED
5.9.1 Network Policy Definition and Scanning
The solution offered MUST provide advanced policy definition capabilities along with the ability to scan entire networks and subnetworks with one click and the ability to automate scanning on a regular schedule.
REQUIRED
5.9.2 Broad Suite of CVE Scan Coverage
The solution MUST provide the broadest possible suite of CVE (known vulnerability) scans across common ports for common software services and the like.
REQUIRED
5.9.3 Regular CVE Pattern Updates
The solution must provide regular updates and new plugins for emerging CVEs within a short timeframe of the CVE becoming known.
REQUIRED
5.9.4 List of Top Threats and Remediations
The solution MUST be able to assemble lists of top threats from scans, based on VPR and provide recommendations on which vulnerabilities pose the greatest risk in order to prioritize remediation efforts.
REQUIRED
5.9.5 Broad Range of Preconfigured Templates
The solution MUST provide preconfigured templates out-of-the-box for a broad range of IT and mobile assets. These must support everything from configuration audits to patch management effectiveness, helping teams quickly understand where vulnerabilities exist and assess configuration compliance against CIS benchmarks and other best practices.
REQUIRED
5.9.6 Customizable Views
The solution MUST provide the ability to easily create reports based on customized views, including specific vulnerability types, vulnerabilities by host or by plugin. It MUST be able to create reports in a variety of formats (such as HTML, CSV and XML) and then easily tailor and email reports to stakeholders with every scan.
REQUIRED
5.9.7 Associative Remediation
The solution SHOULD provide associative remediation via security patch automation using automation tools so that hundreds or even thousands of specific vulnerability instances can be addressed across the whole infrastructure.
OPTIONAL
5.9.8 Standards Compliance Management
The solution MUST provide the general ability to implement vulnerability management processes that drive compliance with PCI, HIPAA, GLBA, CIS, NIST and or similar European or African continental standards.
REQUIRED
5.9.9 Scalability
The offered solution MUST be able to scan whole large networks of computers with thousands of open ports and services within an acceptable time frame (the usual maintenance window). The vendor is to explain the scaling strategy and how it can be used to address a significant eGovernment infrastructure that serves millions of citizens.
REQUIRED
5.10.1 Transparent Gateway Service
The solution MUST provide an anti-spam mail gateway that transparently operates within the email routing infrastructure using standards based protocols such as SMTP, POP and IMAP.
REQUIRED
5.10.2 Common Vulnerability Scans
The solution must be able to scan emails for spam, phishing, and the various types of malware commonly used in attacks that target known system infrastructure vulnerabilities. The solution must be extensible (perhaps by plugin) to support emerging threats, new vulnerabilities and new services.
REQUIRED
5.10.3 Cloud or On-Premise Deployment
The solution MUST provide a cloud-based or on-premise based pre-perimeter defense against spam, phishing emails and virus-infected attachments.
REQUIRED
5.10.4 3rd Party Virus Checker Support
The solution provided MUST support a wide range of 3rd party and open source virus checker software which is independent of the mail scanner module.
REQUIRED
5.10.5 Wide Range of Filtering Approaches
The solution provided MUST support a wide range of filtering approaches and analytic tests such as text analysis, DNS blacklists, collaborative filtering databases and Bayesian filtering.
REQUIRED
5.10.6 Common Infrastructure Deployments
The solution MUST be deployable within common open source mail server infrastructures such as procmail, qmail, Postfix, and sendmail.
REQUIRED
5.10.7 Ability to Integrate in Multiple Points
The mail scanning modules of the solution MUST be able to be integrated at any place in the email stream.
REQUIRED
5.10.8 Multiple Analytic Techniques
The solution MUST provide multiple analytic techniques, as well as in-depth human expertise, to score incoming email attachments as good, bad, or unknown.
REQUIRED
5.10.9 Attachment Containment
The solution MUST run unknown attachment files in containment, a completely virtual environment isolated from other network segments.
REQUIRED
5.10.10 Day Zero Infection Prevention
The solution MUST provide the ability to protect from “day zero” infections by rapidly responding with automated updates to counter newly identified threats and applying pattern based algorithms to detect new threats before they infiltrate systems.
Note: vendor to explain the value proposition of what they offer in this area.
REQUIRED
5.11.1 Application Layer Protection
The proposed solution MUST protect against application layer DDOS attacks. An application layer DDOS attack targets the application layer of the OSI model (i.e. the protocols that interface software modules, such as POP, IMAP, SMTP, HTTP etc.). The size of these attacks is typically measured in requests per second (RPS), and limits must be configurable for both the singular IP addresses and the subnets from which such traffic originates.
REQUIRED
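The configurable per-source RPS limit described above is commonly enforced with a per-IP request counter. The following is a minimal fixed-window sketch of that idea (illustrative only; production mitigations sit in front of the application and use sliding windows or token buckets):

```python
from collections import defaultdict

class RateLimiter:
    """Fixed-window per-IP request counter (illustrative sketch only)."""
    def __init__(self, max_rps):
        self.max_rps = max_rps
        self.window = None              # current one-second window
        self.counts = defaultdict(int)  # requests seen per IP in this window

    def allow(self, ip, now):
        window = int(now)
        if window != self.window:       # a new second has started: reset counters
            self.window = window
            self.counts.clear()
        self.counts[ip] += 1
        return self.counts[ip] <= self.max_rps

limiter = RateLimiter(max_rps=3)
decisions = [limiter.allow("203.0.113.7", now=100.0) for _ in range(5)]
print(decisions)  # [True, True, True, False, False]
```

The same structure extends naturally to subnet-level limits by keying the counter on a network prefix instead of a single address.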
5.11.2 Protocol Style Attack Prevention
The proposed solution MUST protect against protocol style DDOS attacks. Protocol style DDOS attacks target server resources rather than bandwidth through saturation of requests such as TCP SYN connection attempts and general UDP frames, in order to render the target services useless for users. The size of these attacks is typically measured in protocol frames per second (PFPS), and limits must be configurable for both the singular IP addresses and the subnets from which such traffic originates.
REQUIRED
5.11.3 Volume Based Attack Prevention
The proposed solution MUST protect against volume based DDOS attacks. A volume based DDOS attack uses a variety of different techniques to saturate the bandwidth of the attacked site so that other visitors cannot access it, eventually leading the server to crash due to traffic saturation. The solution MUST defend against volume based DDOS in three ways: 1) Attack Prevention and Preemption, performed before the attack based on detection of patterns; 2) Attack Detection and Filtering, performed during the attack, with packets filtered or dropped in order to preserve system integrity; and 3) Attack Source Blacklisting, which can be performed during and after the attack.
REQUIRED
5.11.4 White Lists
The proposed solution MUST support a white-list of addresses that will always be passed to the servers. This is to facilitate normal user operations and reduce the likelihood of false positives.
REQUIRED
5.11.5 Automatic Unblocking
Blacklisted or blocked IP addresses MUST be able to be automatically unblocked by the solution after a configurable timeout period.
REQUIRED
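The automatic-unblocking behaviour above can be sketched as a blocklist whose entries carry a timestamp and expire after the configured timeout. A minimal illustration (real firewalls implement this with rule timeouts rather than in application code):

```python
class TimedBlocklist:
    """Blocklist whose entries expire automatically after a timeout."""
    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.blocked = {}  # ip -> timestamp when the block was applied

    def block(self, ip, now):
        self.blocked[ip] = now

    def is_blocked(self, ip, now):
        blocked_at = self.blocked.get(ip)
        if blocked_at is None:
            return False
        if now - blocked_at >= self.timeout:  # timeout elapsed: unblock
            del self.blocked[ip]
            return False
        return True

blocklist = TimedBlocklist(timeout_seconds=600)
blocklist.block("198.51.100.5", now=0)
print(blocklist.is_blocked("198.51.100.5", now=300))  # True: still within timeout
print(blocklist.is_blocked("198.51.100.5", now=700))  # False: automatically unblocked
```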
5.11.6 Remediation Script Hooks
The solution MUST provide the ability to run hooks for remediation scripts at event edges or at defined intervals for the purpose of cleaning up server resources and ensuring system stability etc.
REQUIRED
5.11.7 Alerting Framework
The solution MUST provide an alerting framework (via email and/or instant messaging) when IP addresses are blocked so that administrators are kept abreast of potential attacks and can begin monitoring activity more closely with a view to manually intervene if necessary.
REQUIRED
5.11.8 Standard Firewall Integration
The solution MUST support and integrate with typical Linux firewall technologies such as APF (advanced policy firewall), CSF (config server firewall) and standard Linux iptables having the ability to insert and adjust rules into the firewall on the fly to cater for attack responses and remediation.
REQUIRED
5.11.9 TCP Kill
The solution MUST provide the ability to kill TCP request processes upon encountering flooding in order to preserve the integrity of the protocol stack and return the system to a normal state where it is able to process TCP requests.
REQUIRED
5.12.1 OWASP Compliance
The custom developed applications solutions MUST take steps to protect against the top OWASP vulnerabilities such as XSS for example. Development processes MUST implement automated tooling to check code for such vulnerabilities.
REQUIRED
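As a concrete illustration of the XSS protection named above, the core defence is escaping user-controlled text before embedding it in HTML output. A minimal Python sketch using the stdlib (the `render_comment` helper is hypothetical, not part of any GovStack component):

```python
import html

def render_comment(user_input):
    """Escape user-controlled text before embedding it in an HTML page (XSS defence)."""
    return "<p>" + html.escape(user_input) + "</p>"

# The script tag is neutralised into inert text
print(render_comment("<script>alert(1)</script>"))
```

Template engines in common web frameworks apply this escaping automatically; the automated tooling the requirement mandates checks for places where it has been bypassed.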
5.12.2 OWASP Source Code Scans
The solution provided MUST be able to scan source code committed to repositories by developers to identify and remediate the top OWASP vulnerabilities. Security hotspots pertain to the implementation of security-sensitive code. Detection and human review within the developer workflow are required to ensure that defects pertaining to security hotspots do not find their way into production code.
As developers code and consequently deal with security hotspots, they MUST also be able to learn how to evaluate security risks by example and error identification whilst continuously adopting better secure coding practices. The tooling provided for this MUST enable such a scenario to take place to drive continuous developer security improvement.
Note that this pertains to custom coding for applications and components developed by all building blocks and not to the code behind 3rd party components and applications to which GovStack must be integrated.
REQUIRED
5.12.3 Support for Common Programming Languages
The solution provided should support common programming languages used in the enterprise such as Java, JavaScript, C, C++, C#, Python, Scala, Kotlin, Golang and PHP.
OPTIONAL
5.12.4 Detection and Remediation of Top 10 Vulnerabilities
The solution provided must minimally support the detection and remediation of the following types of security vulnerabilities (based on the OWASP Top 10 for 2021):
Injection (all types)
Broken authentication
Sensitive data exposure
XML External Entities (XXE)
Broken access control
Security misconfiguration
Cross site scripting (XSS)
Insecure object deserialization
Libraries/components with known vulnerabilities
Lack of logging and monitoring
Generally poor coding practices in memory management etc.
REQUIRED
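The first item in the list above, injection, is also the easiest to demonstrate. The standard remediation is parameterized queries, where the driver binds user input as data rather than splicing it into SQL. A minimal sketch using the stdlib `sqlite3` module (the schema and helper are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    """Parameterized query: the driver binds `name` as data, never as SQL."""
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(find_user(conn, "alice"))             # normal lookup succeeds
print(find_user(conn, "alice' OR '1'='1"))  # injection attempt matches nothing
```

Had the query been built by string concatenation, the second call would have returned every row; with binding, the attack string is just an unmatched literal.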
5.12.5 Common Developer Tools and Frameworks
The solution provided MUST integrate with common developer tools and frameworks as well as source code control systems such as GIT, SVN etc. and Jira for full cycle issue management.
REQUIRED
Functional Requirement | Type (Must/Should/May)
5.13.1 Container Scanning Features
Containers (Docker and OCI) are presumed to be the main infrastructure layer for the project, and it is this layer that requires protection from common vulnerabilities and exposures (CVEs). The solution for containers MUST provide scanning tools to scan the content of deployed containers for known vulnerabilities, with a view to reducing the attack surface available to attackers.
REQUIRED
5.13.2 Fully Integrated DevSecOps
The solution for containers MUST have a fully integrated DevSecOps approach for CI/CD (continuous integration and continuous deployment) that prevents containers with known vulnerabilities from being deployed and enables patches for known CVE to be deployed both inside the container and to the container orchestration layer and its associated components (AKA PaaS).
REQUIRED
5.13.3 Automatic Infrastructure Update
The solution for containers MUST address the problem of automatically updating the infrastructure on a regular and/or on-demand basis to apply security patches for known vulnerabilities as soon as they are available. The goal is to reduce the window of vulnerability for newly discovered CVEs. This is particularly in consideration of historical vulnerabilities that impacted hardware, such as Spectre, which impacted over 40M computers worldwide.
REQUIRED
5.13.4 FIPS 140-2 and ECC Certifications
The solution for container orchestration and its associated platform infrastructure MUST be certified as compliant with known security standards such as NIST FIPS 140-2 and the European Common Criteria certifications.
REQUIRED
Requirement | Type (Must/Should/May)
5.15.1 Multi-Channel Detection and Prevention
The solution MUST provide a multi-channel means of detecting and preventing data leakage for critical and private data over web, email, instant messaging, file transfer, removable storage devices and printers and any other file transfer means.
REQUIRED
5.15.2 Fully Controlled Endpoint Protection
The solution MUST provide endpoint protection for data in use on systems that run on internal end-user workstations or servers. Endpoint-based technology MUST address internal as well as external communications. Endpoint technology MUST be used to control information flow between groups or types of users. It MUST also control email and instant messaging communications before they reach the corporate archive and block communication/consumption/transmission/forwarding of critical and sensitive data.
The solution MUST monitor and control access to physical devices (such as mobile devices with data storage capabilities; it is best to restrict mobile access) and restrict access to information before it is encrypted (either in situ or in transit). The solution MUST provide the ability to implement contextual information classification (for example, identifying what constitutes CUI, or identifying the source or author generating content).
The solution MUST provide application controls to block attempted transmissions of confidential information and provide immediate user feedback with logging and alerts to prevent or intercept future attempts through other channels. The endpoint solution MUST be installed on every workstation (laptop and mobile having access also) in the network (typically via a DLP Agent). Typically it pays to ensure that mobile devices are restricted from such access.
REQUIRED
5.15.3 Confidential Information Identification
The solution MUST include techniques for identifying confidential or sensitive information. Sometimes confused with discovery, data identification is a process by which organizations use a data leakage prevention technology to determine what to look for.
Data is classified as either structured or unstructured. Structured data resides in fixed fields within a file such as a spreadsheet, while unstructured data refers to free-form text or media in text documents, PDF files and video etc.
REQUIRED
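A common concrete instance of the structured-data identification described above is detecting payment card numbers: a DLP rule pairs a digit-run pattern with the Luhn checksum so that ordinary reference numbers are not flagged. A minimal sketch (the card number shown is a synthetic Luhn-valid example, not a real card):

```python
import re

def luhn_valid(digits):
    """Luhn checksum: separates plausible card numbers from random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit, counted from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Flag 16-digit runs that pass the Luhn check - a typical structured-data DLP rule."""
    return [m for m in re.findall(r"\b\d{16}\b", text) if luhn_valid(m)]

# Only the second number is Luhn-valid; the first is an ordinary reference number
print(find_card_numbers("order ref 1234567890123456, card 4539578763621486"))
```

Real DLP products layer many such detectors (regular expressions, dictionaries, fingerprints, machine learning) across the channels listed in 5.15.1.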
5.15.4 Ability to Implement Compliance Audits
The solution MUST provide the general ability to implement data loss and leakage prevention processes that drive compliance with PCI, HIPAA, GLBA, CIS, NIST and or similar European or African continental standards.
REQUIRED
Functional Requirement
Type (Must/
Should/
May)
5.15.1 Support for Standards Based Data at Rest Encryption
The solution (all components hosting sensitive data, personal data, credit card data or CUI) is required to be able to support the encryption of data at rest using standard strong encryption techniques such as ECC and RSA with certificate based PKI (i.e. X509 etc.). The certificates and PKI infrastructure used for this purpose MUST comply with the requirements stipulated in this document for digital identity.
The solution vendors for 3rd party components should articulate how each of the components supplied provides data encryption facilities for data at rest and the strength and benefits of their approach. This also applies to all components hosting data for all building blocks.
REQUIRED
5.15.2 Support for Standards Based Data in Transit Encryption
All of the internal connections between the various components in each building block and between building blocks as well as the external connections for API calls etc. between web and mobile applications and their respective services must be encrypted for data in transit using standard strong encryption techniques such as ECC and RSA with certificate based PKI (ie. X509 etc.). The certificates and PKI infrastructure used for this purpose MUST comply with the requirements stipulated in this document for digital identity.
The solution vendors for 3rd party components should articulate how each of the components supplied provides data encryption facilities for data in transit and the strength and benefits of their approach. This also applies to all components communicating data between all building blocks.
REQUIRED
Functional Requirement
Type (Must/
Should/
May)
5.16.1 Social Threat Mitigation
The project MUST mitigate these types of threats with a combination of policy, training and technology. Many of the attacks in this style are initiated through phishing and dangerous email attachments. It is therefore anticipated that much of the technical aspects of mitigating these types of attacks can be addressed through the requirements identified elsewhere in this document.
Cyber-criminals use a range of attack styles leveraging social networking, engineering and media to achieve a range of goals: for example to obtain personal data, hijack accounts, steal identities, initiate illegitimate payments, or convince the victim to proceed with any other activity against their self-interest, such as transferring money or sharing personal data.
The most frequent styles of attack include:
Phishing – Email or social media based social engineering attacks.
Targeted phishing - phishing attacks aimed at particular victims, for example senior management (whaling: https://searchsecurity.techtarget.com/definition/whaling) or specific people/organisations (spear phishing: https://searchsecurity.techtarget.com/definition/spear-phishing), which have recently become popular forms of social engineering attack for cyber-criminals.
Vendors MUST, however, respond to this section of the requirements and articulate how their proposed solution explicitly protects against these styles of attack, and how the other offerings in their proposal (including policy and training enablement) collectively provide the required degree of protection and mitigation.
REQUIRED
5.17.1 Continuous Posture Assessment
The project MUST deliver continuous cloud security posture assessments of cloud environments for security and compliance teams. The solution must be able to manage the massive number of security posture management issues that will confront the project as infrastructure is deployed on public clouds, even with modest deployments.
The solution MUST provide an assessment approach that addresses all of the common security concerns accompanying public-cloud-based deployments using containers, Kubernetes and other public cloud services.
The key requirements are both taking control of, and keeping control of, security risks in multiple, ever-growing and ever-changing cloud environments. Typically there is simply too much data to process, too often, with an ever-expanding attack surface as more applications and services are deployed. This is the problem space that MUST be dealt with by the proposed solution, and why continuous cloud security posture management is an absolute MUST for cloud deployments.
Note that GovStack may be deployed on cloud infrastructure, on-premise infrastructure, or both. The intent of the CSPM requirements is public cloud deployments, although CSPM can also be used to control on-premise public cloud offerings such as Azure Arc or AWS Outposts.
REQUIRED
5.17.2 Inventory, Ownership and Customization
In terms of the inventory of data and services deployed, the following MUST be manageable (see the ensuing rows):
REQUIRED
5.17.3 Inventory Collection Coverage
The solution MUST collect metadata from all available cloud resources that have been deployed and utilized in the account. This collection MUST go beyond the core types of services (compute, networking, database, IAM, etc.) to incorporate a complete security metadata view of the utilized cloud resources and any potential exposures both inside and outside containers.
REQUIRED
5.17.4 Data Ownership
Solutions delivered as Software-as-a-Service (SaaS) provide quick onboarding, but they are typically managed by a third party with account-wide access to your cloud resources. This may present challenges with data ownership, compliance attestations of an external third party, and adherence to regional compliance mandates. These issues MUST be identified by CSPM.
REQUIRED
5.17.5 Customization
Third-party solutions can vary widely in their ability to adjust how data is collected, how security checks are implemented, how results are filtered/excluded and reported, and how and when teams are alerted to issues. Without full control over all of these aspects, you may end up with results and notifications that cannot be properly tuned, which can lead to the tool being ignored and overall alert fatigue. The CSPM solution MUST provide the ability to navigate the posture across all third-party components and alleviate alert fatigue.
REQUIRED
5.17.6 Answer Complex/Advanced Questions
Simple questions like “Is this S3 bucket public?” can usually be answered easily, but questions like “Are there any publicly accessible virtual machines with attached instance credentials that allow reading from S3 buckets tagged ‘sensitive’?” or “Which GKE clusters have pods with access to escalate to ‘Project Owner’ via the attached service account?” MUST also be answerable by the CSPM solution.
REQUIRED
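Answering such composite questions amounts to traversing relationships between resources. The sketch below models a tiny, entirely hypothetical inventory as dictionaries and answers the "publicly accessible VM that can read a sensitive bucket" question; a real CSPM product would run an equivalent query over a graph database, not in-memory dicts.

```python
# Hypothetical resource inventory: attributes plus typed links between resources.
resources = {
    "vm-1": {"type": "vm", "public": True},
    "vm-2": {"type": "vm", "public": False},
    "role-a": {"type": "role", "can_read_s3": True},
    "bucket-1": {"type": "bucket", "tags": ["sensitive"]},
}
edges = {  # resource id -> ids of resources it is attached/linked to
    "vm-1": ["role-a"],
    "vm-2": ["role-a"],
    "role-a": ["bucket-1"],
}

def exposed_vms(resources, edges):
    """Public VMs whose attached role can read a bucket tagged 'sensitive'."""
    hits = []
    for rid, attrs in resources.items():
        if attrs.get("type") == "vm" and attrs.get("public"):
            for role in edges.get(rid, []):
                if resources[role].get("can_read_s3") and any(
                    "sensitive" in resources[b].get("tags", [])
                    for b in edges.get(role, [])
                ):
                    hits.append(rid)
    return hits

print(exposed_vms(resources, edges))  # → ['vm-1']
```

The point of the deep-linked graph requirement is precisely that chained conditions like this (exposure → credential → data sensitivity) can be expressed as one query.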
5.17.7 Provide Continuous Results and Triage
The CSPM solution MUST be designed with tunable results tracking and results triage workflows across multiple assessment intervals for continuous assessment as the total deployed solution evolves.
REQUIRED
5.17.8 Focus on both Hardening and Compliance
Most security practitioners are aware of the differences between an environment that is only compliant with one that is compliant and also well hardened. Compliance is a key driver for obtaining project funding and being able to operate a business legally, but going no further than compliance still leaves you open to critical risks. The CSPM tools MUST cover both security best practices and compliance objectives equally.
REQUIRED
5.17.9 Compliance Objectives Driven Approach
The solution MUST be able to associate controls with one or more compliance objectives. Views, filtering, and workflows should be driven by compliance objectives. For example, filtering controls by “PCI”, “NIST 800-53”, or “CIS” should narrow down the list to the controls that align with those frameworks with an identical mechanism that can also be used to filter controls for “lateral movement” or “privilege escalation”.
REQUIRED
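The objective-driven filtering described above can be sketched as simple tag matching, where framework names and threat themes use one identical mechanism. Control names and objective labels here are invented for illustration only.

```python
# Each control carries a set of compliance objectives and threat themes.
controls = [
    {"id": "c1", "name": "Encrypt data at rest",
     "objectives": {"PCI", "NIST 800-53"}},
    {"id": "c2", "name": "Restrict instance metadata service",
     "objectives": {"CIS", "lateral movement"}},
    {"id": "c3", "name": "Limit IAM wildcard policies",
     "objectives": {"NIST 800-53", "privilege escalation"}},
]

def filter_controls(controls, objective):
    """Narrow controls to those aligned with a framework or threat theme."""
    return [c["id"] for c in controls if objective in c["objectives"]]

print(filter_controls(controls, "NIST 800-53"))          # → ['c1', 'c3']
print(filter_controls(controls, "privilege escalation"))  # → ['c3']
```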
5.17.10 CSPM Basics
The CSPM solution MUST address the basic cloud security posture management abilities as defined above for the broadest range of public cloud infrastructure providers (for example AWS, GCP and Azure).
The following basic activities MUST be supported:
Collect several types of public-cloud-specific configuration data on a one-time or recurring basis from any cloud account resources (VMs, Clusters, IAM, etc),
Parse and load the configuration data into a graph database with deep linked relationships between resources to support advanced querying capabilities,
Run a customizable series of policy checks to determine conformance and record passing/failing resources on a recurring basis on that configuration,
Create custom groupings of related policy checks aiding in tracking remediation efforts and an associated reduction in risk over time,
Provide notifications to multiple destinations (email, logs, instant message etc.) when specified deviations from desired baselines occur.
REQUIRED
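The collect → check → notify cycle listed above can be sketched as a small loop. The resource records, check names and failure conditions below are all illustrative assumptions, not the behaviour of any particular CSPM product.

```python
# Two example policy checks over collected resource metadata.
def check_no_public_buckets(resource):
    return not (resource["type"] == "bucket" and resource.get("public"))

def check_mfa_on_admins(resource):
    return not (resource["type"] == "user" and resource.get("admin")
                and not resource.get("mfa"))

CHECKS = {
    "no-public-buckets": check_no_public_buckets,
    "mfa-on-admins": check_mfa_on_admins,
}

def assess(resources, notify):
    """Run every check over every collected resource; notify on deviations."""
    failures = []
    for res in resources:                    # 1. collected configuration data
        for name, check in CHECKS.items():   # 2. customizable policy checks
            if not check(res):
                failures.append((name, res["id"]))
    for name, rid in failures:               # 3. notify configured destinations
        notify(f"{name} failed for {rid}")
    return failures

alerts = []
resources = [
    {"id": "b1", "type": "bucket", "public": True},
    {"id": "u1", "type": "user", "admin": True, "mfa": True},
]
print(assess(resources, alerts.append))  # → [('no-public-buckets', 'b1')]
```

Running this on a recurring schedule, and grouping checks into custom sets, gives the continuous-assessment and remediation-tracking behaviour the requirement calls for.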
5.19.1 Security Automation in General
In general, the vulnerability scanning requirements are covered in other sections of this document. The focus of this requirement is automation of vulnerability management, to keep the window of vulnerability as short as possible once a vulnerability becomes known. The project MUST implement security automation, meaning automated patching and upgrades of the software and firmware in and around the applications and networking infrastructure, to ensure that patches addressing security vulnerabilities are deployed consistently across all of the infrastructure.
REQUIRED
5.19.2 Repeatability for both Cloud and On-Premise Deployments
The security automation solution MUST provide a clean, consistent, simple and repeatable means of configuration management, applications deployment and patching that is flexible enough to support the entire infrastructure including both on-premise and cloud based deployments. Typically this would be achieved with a playbook style approach where playbooks are built using a language such as YAML and applied consistently through an automated scripting approach.
REQUIRED
5.19.3 Agentless Orchestration and Provisioning
The solution MUST provide orchestration, provisioning, module, plugin and inventory management support in order to be comprehensive for enterprise needs. Ideally, the solution SHOULD be agentless and not require any additional software to be deployed on each node in the network.
REQUIRED
5.19.4 Consistent Configuration Management
The solution MUST provide simple, reliable, and consistent configuration management capabilities. It MUST be deployable quickly and simply, with many out-of-the-box plugins for common technologies. Configurations MUST be simple data descriptions of infrastructure that are both readable by humans and parsable by machines.
The solution itself MUST be able to connect to remote nodes under administration via SSH (Secure Shell) keys for secure configuration. For example, it MUST be possible to apply configurations consistently to hosts via lists of IP addresses, applying playbooks to install the configuration update on all of the nodes; the playbook can then be executed from a control machine to effect the update. Logs MUST be taken and reported on so that centralized administration is aware of any failures and can address them manually if necessary.
REQUIRED
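The inventory → apply → log flow described above can be sketched structurally as follows. This is only a shape: `apply_config` stands in for the real transport (e.g. SSH with key authentication, or an Ansible-style playbook run), and the host addresses and config payload are hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("config-run")

def run_playbook(hosts, config, apply_config):
    """Apply one configuration to every host in the inventory.

    Failures are logged and returned so centralized admin can follow up
    manually, per the requirement.
    """
    failures = []
    for host in hosts:                       # inventory = list of IP addresses
        try:
            apply_config(host, config)       # stand-in for SSH/playbook step
            log.info("applied %s to %s", config["name"], host)
        except Exception as exc:
            log.error("failed on %s: %s", host, exc)
            failures.append(host)
    return failures

def fake_apply(host, config):
    if host.endswith(".13"):                 # simulate one unreachable node
        raise ConnectionError("unreachable")

hosts = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
print(run_playbook(hosts, {"name": "ntp-hardening"}, fake_apply))  # → ['10.0.0.13']
```

The design point is that the run is idempotent and repeatable: the same playbook can be re-applied to the failure list alone once the nodes are reachable.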
5.19.5 Orchestrated Workflows
The offered solution MUST provide orchestration, which involves bringing different elements together to run a whole operation. For example, with application deployment you need to manage not just the front-end and back-end services but also the databases, networks and storage, among other components. The orchestration solution MUST also ensure that all tasks are handled in the proper order using automated workflows and provisioning. Orchestrations and playbooks MUST be reusable and repeatable tasks that can be applied time and again to the infrastructure based on parameters.
REQUIRED
5.19.6 Site-wide Security Policy Implementation
The solution MUST have the ability to implement site-wide security policies (such as firewall rules or locking down users), which can be implemented along with other automated processes. Administrators MUST be able to configure the security details on the control machine and run the associated playbook to automatically update the remote hosts with those details. This means that security compliance should be simplified and easily implemented. An admin’s user ID and password MUST NOT be retrievable in plain text.
REQUIRED
5.21.1 HIDS, SIM and SIEM with Real-Time Integrity Monitoring
The solution for intrusion prevention and detection MUST combine HIDS monitoring features with Security Incident Management (SIM)/Security Information and Event Management (SIEM) features. It MUST also be able to perform real-time file integrity monitoring, Windows registry monitoring, rootkit detection, real-time alerting, and active response.
REQUIRED
5.21.2 Multi-Platform Support
The solution MUST be multi-platform and support deployment on Windows Server and most modern Unix-like systems, including Linux, FreeBSD, OpenBSD, and Solaris. The reason for this is that it is yet to be determined exactly which O/S infrastructure will be required for the full GovStack solution; it will likely end up being a hybrid mix of various O/S platforms, but predominantly Linux in containers.
REQUIRED
5.21.3 Centralized Management
The intrusion prevention and detection software MUST consist of a central manager for monitoring and receiving information from agents (agents, small programs installed on the systems to be monitored, are most likely required; an agentless solution is acceptable if feasible). The central manager MUST include storage of the file integrity checking databases, logs, events, and system auditing entries.
REQUIRED
5.21.4 Basic IDS Features
The intrusion prevention and detection solution MUST minimally offer the following features:
Log-based Intrusion Detection
Rootkit Detection
Malware Detection
Active Response
Compliance Auditing
File Integrity Monitoring
System Inventory
Note that this is a very minimalist set of requirements and vendors may offer enterprise level open source subscriptions that offer more advanced features for intrusion prevention and detection which are AI/ML based for example. It is up to each vendor to articulate the value of what they are offering and why it is necessary.
REQUIRED
5.22.1 General OSINT Requirements
The project MUST provide OSINT tools with the following purposes and general requirements in focus (see ensuing rows):
REQUIRED
5.22.2 Discovery of Public-Facing Assets
The most common function of OSINT tools is helping IT teams discover public-facing assets and map what information each possesses that could contribute to a potential attack surface. This is public information about vulnerabilities in services and technologies that cyber-criminals can potentially use to gain access. These tools do not typically look for specific program vulnerabilities or perform penetration testing, although some may incorporate such features.
The main purpose of OSINT tools is to determine what information someone could publicly discover about the organization's assets without resorting to hacking, and to offer security professionals the opportunity to proactively address these vulnerabilities by reading the reports.
REQUIRED
5.22.3 Discover Relevant Information Outside the Organization
A secondary function that some OSINT tools perform is looking for relevant information outside of an organization, such as in social media posts or information posted at specific domains and locations that might be outside of a tightly defined network. Organizations which have acquired a lot of diverse IT assets are excellent candidates for OSINT tools and we see that this has huge potential for GovStack. Given the extreme growth and popularity of social media, looking outside the organization perimeter for sensitive information is helpful for just about any group.
REQUIRED
5.22.4 Collate Discovered Information into Actionable Form
Finally, some OSINT tools help to collate and group all the discovered information into useful and actionable intelligence. Running an OSINT scan for a large enterprise can yield hundreds of thousands of results, especially if both internal and external assets are included. Piecing all that data together and being able to deal with the most serious problems first can be extremely helpful. The solutions offered MAY also incorporate AI/ML and neural network based solutions for better discovery and automation of analytics etc.
The offered solution SHOULD deliver better alignment with the organization's ability to comply with standards such as PCI DSS, HIPAA and GDPR. Prospective vendors SHOULD articulate how their solution achieves this goal.
REQUIRED
5.22.5 Open Source Offering
The offered solution MUST be based fully on open source components. The vendor may offer subscriptions for support so long as the offered solution does not require those subscriptions in order to deliver the core functionality specified in this document.
REQUIRED
Functional Requirement | Type (Must/Should/May)
5.23.1 Identify Fraudulent Purchases and Transactions
The solution MUST provide the ability to identify potentially fraudulent purchases, transactions, or access. The solution MUST continuously examine user behavior, analyse risk figures, and identify any suspicious activity that may require intervention.
REQUIRED
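As one very small illustration of behaviour-based scoring, the sketch below flags a transaction whose amount deviates strongly from a user's history using a z-score. Real fraud engines combine many behavioural signals and models; the threshold and amounts here are invented for the example.

```python
import statistics

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction far outside the user's historical behaviour.

    A plain z-score is used as a stand-in for a proper risk model.
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return new_amount != mean
    return abs(new_amount - mean) / stdev > threshold

history = [20, 25, 22, 30, 24, 26]        # hypothetical past amounts
print(flag_suspicious(history, 25))       # typical amount → False
print(flag_suspicious(history, 500))      # extreme outlier → True
```

In a deployed system a positive flag would raise the transaction's risk score and route it into the investigator workflow rather than blocking it outright.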
5.23.2 General Features Required
The solution MUST be able to check the potential of fraudulent actions by users both internal and external, ensuring that transactions are lawful and genuine. The solution MUST also protect the sensitive information of both organizations and citizens (also see data leakage prevention). The solution MUST meet the following general features requirements:
AI/Deep Learning (Collaborative)
Analytics/Data Mining
Insider Threat Monitoring
Risk Assessment
Transaction Scoring
Intelligence Reporting
ID Analytics and Alerts
Real-Time Monitoring
Blacklisting
Investigator Notes
Transaction Approval
Custom Fraud Parameters
Unified Platform
Data Enrichment
Continuous Innovation
Visualize real-time data
Anomaly detection
Frictionless Commerce
Early Warning System
REQUIRED
5.23.3 Fraud Information Dashboard
The solution MUST provide an information dashboard with data visualization of statistical information and alerts pertaining to fraud prevention and detection that need to be addressed with an assigned priority.
REQUIRED
5.23.4 Data Import from Many Sources
The solution MUST provide the ability to import data sets from many applications and databases in order to perform analysis and provide fraud analysis and scoring information along with alerts on configurable events and thresholds.
REQUIRED
5.23.5 CRM Integration
The solution MUST provide integration with common customer relationship management solutions that are likely to be deployed as part and parcel of GovStack or perhaps already existing in the government's target architecture.
REQUIRED
5.23.6 API for Data Integration
The solution MUST provide an open API for the purposes of integrating data from unknown sources to be checked for fraudulent activity.
REQUIRED
5.23.7 AI/ML Based Capabilities
Ideally the solution SHOULD incorporate advanced AI and ML capabilities for the purposes of detecting fraud with a combination of neural network, pattern recognition and data mining approaches to identify potentially fraudulent events based on advanced models and historical learning.
OPTIONAL
5.23.8 Investigator Notes and Workflows
The solution MUST provide investigator notes and workflow capabilities for investigating potentially fraudulent incidents and managing those incidents through to resolution.
REQUIRED
5.24.1 General Incident Response Requirements
The solution MUST provide an incident response and management system that implements the following general requirements as a fully integrated solution (referring to the other sections of this document):
REQUIRED
5.24.2 Support for NIST and/or SANS
In general the NIST and/or SANS incident response frameworks SHOULD be followed and the tooling MUST facilitate this.
REQUIRED
5.24.3 Assessment and Review Facilities
The solution MUST provide capabilities for reviewing and documenting existing security measures and policies to determine effectiveness as well as performing a risk assessment to determine which vulnerabilities currently exist and the priority of your assets in relation to them.
This information is then applied to prioritizing responses for incident types, and is also used to reconfigure systems to cover vulnerabilities and focus protection on high-priority assets. This phase is where you refine existing policies and procedures, or write new ones if they are lacking. These procedures include documenting a communication plan and the assignment of roles and responsibilities during an incident.
REQUIRED
5.24.4 Tools to Support Detection and Identification
The solution MUST support the use of the tools and procedures determined in the preparation phase and define the process and teams to work on detecting and identifying any suspicious activity. When an incident is detected, team members need to work to identify the nature of the attack, its source, and the goals of the attacker.
During identification, any evidence collected MUST be protected and retained for later in-depth analysis. Responders MUST document all steps taken and evidence found, including all details. The purpose of this is to more effectively prosecute if and when an attacker is identified.
5.24.5 Support for Communications Planning
After an incident is confirmed, communication plans MUST be initiated by the system. These plans MUST inform all concerned parties via workflow (i.e. security board members, stakeholders, authorities, legal counsel, and eventually users), advising them of the incident and which steps MUST be taken.
REQUIRED
5.24.6 Threat Containment and Elimination
The solution and its processes MUST support the containment and elimination of threats. After an incident is identified, containment methods are determined and enacted.
Containment:
The goal is to advance to this stage as quickly as possible to minimize the amount of damage caused.
Containment MUST be able to be accomplished in sub-phases:
Short term containment—immediate threats are isolated in place. For example, the area of your network that an attacker is currently in may be segmented off. Or, a server that is infected may be taken offline and traffic redirected to a failover.
Long term containment—additional access controls are applied to unaffected systems. Meanwhile, clean, patched versions of systems and resources are created and prepared for the recovery phase.
Elimination:
During and after containment, the full extent of an attack MUST be made visible. Once teams are aware of all affected systems and resources, they can begin ejecting attackers and eliminating malware from systems. This phase continues until all traces of the attack are removed. In some cases, this may require taking systems offline so assets can be replaced with clean versions in recovery. It is anticipated that security automation tools will play a key role in this phase and SHOULD be integrated with the solution for this purpose (see the section outlining security automation requirements).
REQUIRED
5.24.7 Recovery, Restoration and Refinement
The solution MUST support a recovery, restoration and refinement phase where recovery and restoration of damaged systems is achieved by bringing last-known-good versions online. Ideally, systems can be restored without loss of data but this isn’t always possible.
Teams MUST be able to determine when the last clean copy of data was created and restore from it. The recovery phase typically extends for some time, as it also includes monitoring systems after an incident to ensure that attackers don’t return.
Feedback and refinement MUST be enabled so that lessons learned from the team’s reviews are captured along with the steps that were taken in response. The team MUST be able to address what went well and what didn’t, and document a process for future improvements.
Note that this is not something that gets performed in isolation by the incident response system but an integrated process that coordinates these phases across all security systems to formulate and execute the response.
REQUIRED
5.25.1 Comprehensive Sandbox
The proposed solution MUST include a security sandbox environment with instances of the entire software stack and all of the security tools installed and configured such that security testing and scenarios can be addressed on an ongoing basis. This is not so much a solution requirement as a deployment requirement and MUST address all of the processes associated with each of the facets of security defined herein.
REQUIRED
5.25.2 Scalability
The security sandbox environment MUST scale to a level suitable for testing scenarios such as DDoS prevention, but does not have to be implemented at the same scale as, for example, the regular test environment or the production environment.
REQUIRED
5.25.3 Test Scripting and Automation
The project MUST conduct security testing using automated scripts as much as possible on an ongoing basis, and the security team MUST take ownership of the ongoing securing of all digital assets for each and every deployment. Note that the responsibility for these activities may ultimately be delegated to a government or country team in the case of build-X-transfer implementation scenarios. The role of the solution provider is to ensure that the baseline for these processes is established prior to any handover/transfer.
REQUIRED
5.26.1 General Recovery Requirements
The solution MUST cater for the recovery of all critical digital infrastructure in the case of major security incidents. Refer to the section above regarding the incident response system for the general requirements on how such recovery MUST be performed.
REQUIRED
5.26.2 Backup for Code and Images
The backup and recovery systems for code, system images and data MUST cater for complete recovery of all critical digital infrastructure after natural disasters and also security incidents. The detailed requirements for such recovery of systems are to be provided by each building block as custodian of the code, images and data. Detailed information on the specific recovery requirements MUST be provided by the project for each GovStack implementation as they may vary widely due to constraints on budget and capabilities.
REQUIRED
0.1 | Laurence Berry, Betty Mwema | Add draft content for standards and patterns into the specification structure |
0.2 | Laurence Berry,Betty Mwema, Dr. P. S. Ramkumar, Valeria Tafoya | Restructure the specifications to work with the UI/UX working group needs |
Share research findings with team members, senior members or strategic leaders, and even the general public whenever practical.
The aim is not just to share information but also to generate dialogue and collaborative action based on the findings.
Organise Your Findings
Begin by grouping your user insights, key takeaways, and suggestions. This could be grouped by themes, user groups, and stages in the user journey.
Create a Simple Presentation
Document your findings. Each slide could represent a key finding or insight. Remember to use clear, simple language and include visual aids where possible to increase understanding.
Schedule a Playback Session
Invite team members and stakeholders to a meeting where you'll share your findings. Make time for discussion.
Document and Share
Share the presentation along with any additional notes from the discussion. This ensures that everyone has access to the information and can refer back to it in the future.
You might even consider publishing findings openly through a blog or similar format.
Start by understanding the needs and requirements of the solution, including users' needs, expectations, and pain points. Consider that the "Person" and the "Role" are not the same. For example, the same person may use a health care application as a doctor and also as a patient, but the needs of a doctor's UI/UX are different from those of a patient, and while the doctor may work on the data of multiple patients, a patient can access only self-data. You can find more examples of how to understand user needs in the implementation playbook.
Understanding user needs begins with user research. This includes techniques like surveys, interviews, user testing, and analysis of usage data. The goal is to understand the tasks users are trying to complete, the problems they face, and the goals and motivations of users in specific roles.
Always question assumptions about what users need in specific roles. Just because something is commonly done or seems like a good idea doesn't mean it is what users need. Validate every assumption with data.
Before jumping into solutions, make sure you have correctly framed the problem. Ask: What user need is this solving? Why is this a problem for our users? How do we know this?
Not all user needs are equally important. Use data from your research to prioritise features and improvements based on what users need most.
Once you're ready to go live, continuously monitor and evaluate the performance and usability of the service, and iterate the design accordingly to drive continuous improvement and optimise user experience.
At a basic level, all services should be tracking metrics such as:
User Satisfaction - Overall, how satisfied are users with your service? This can be measured through surveys, feedback forms, or by analysing user behaviour (e.g., how often they return to your service).
Task Completion Rate - What percentage of users successfully complete the tasks they set out to do on your service?
Error Rates - How often do users encounter errors or difficulties when using your service?
In addition to these basics, each service will likely have specific KPIs relevant to its unique goals and user tasks. Identify what these are and how you can measure them.
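The basic metrics above can be computed from any event log that records sessions. The sketch below assumes a hypothetical log where each session notes completion, errors encountered, and a satisfaction rating; the field names are invented for illustration.

```python
# Hypothetical event log: one record per user session on the service.
sessions = [
    {"completed": True,  "errors": 0, "rating": 5},
    {"completed": True,  "errors": 1, "rating": 4},
    {"completed": False, "errors": 2, "rating": 2},
    {"completed": True,  "errors": 0, "rating": 4},
]

def kpis(sessions):
    """Compute the three baseline service metrics from session records."""
    n = len(sessions)
    return {
        # share of users who finished the task they set out to do
        "task_completion_rate": sum(s["completed"] for s in sessions) / n,
        # share of sessions in which the user hit at least one error
        "error_rate": sum(s["errors"] > 0 for s in sessions) / n,
        # mean of collected satisfaction ratings
        "avg_satisfaction": sum(s["rating"] for s in sessions) / n,
    }

print(kpis(sessions))
```

Reviewing these numbers on a regular schedule, alongside qualitative feedback, is what turns the metrics into the continuous-improvement loop described above.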
Run a session with your team to create a performance framework.
Use analytics tools to collect data on these KPIs. Google Analytics is a popular choice, but there are many other tools available. Make sure to set up your analytics tool to track the specific actions and events that correspond to your KPIs.
Implement a mechanism to collect user satisfaction data at the end of key user journeys. This could be a survey or a simple rating system.
See the pattern for collecting feedback
Do not wait until the end of the journey to collect feedback. Implement feedback mechanisms at key points throughout the user journey. This could be feedback forms on individual pages, live chat options, or proactive prompts asking users if they need help.
See the pattern for collecting feedback
Set a regular schedule to review your KPIs and user feedback. Use this data to identify issues and opportunities for improvement, and take action accordingly.
Find opportunities to collaborate closely with users, stakeholders, and other team members with diverse multidisciplinary skills, throughout the design process.
Empower users to take an active role in co-creating and co-designing public services.
Involve stakeholders early on to understand their expectations and objectives.
Get feedback from stakeholders to review and comment on design decisions and findings.
Conduct workshops or brainstorming sessions for diverse input.
Have peer reviews to get the perspective of different roles in the design.
Emphasise user needs and project goals within the team.
Carry out user research activities, like interviews and surveys, to understand user needs.
Hold co-design sessions with users, for them to participate directly in the design process.
Conduct usability testing with real users to identify issues and opportunities for improvement.
Collect user feedback post-launch.
Usability testing allows you to observe firsthand how users interact with your product. You can identify any challenges they encounter and understand the 'why' behind their behaviour. This improves user experience and can prevent costly redesigns later on.
Further reading and resources for usability testing include:
A cloud-based analytics solution, such as Google Analytics, is a straightforward and easy-to-set-up option that offers robust data about your users' behavior. It is a beneficial choice for those who want to get started quickly and without much technical setup.
Setup: Begin by creating an account on the platform of your choice. You will then add the tracking code they provide to your website. Ensure this code is embedded on each page you wish to monitor.
Configuration: Within the platform, you'll set up goals or events that align with your Key Performance Indicators (KPIs). This allows the software to track specific user actions that are of interest to your business.
Monitoring: Once the setup is complete, you can start monitoring user behavior data through the platform's dashboard.
Remember to respect user privacy throughout this process, which involves informing users about the data you collect, why you collect it, and offering an option to opt out.
Self-hosted analytics, such as open-source platforms like Matomo or Plausible, offer more control over your data and are often favored by businesses that place a high emphasis on data privacy and security. This is particularly important if you value keeping data in-country due to regulatory or compliance requirements.
Setup: You'll need to set up these platforms on a server, which could be owned by you or rented from a hosting provider. After this, you'll add the platform-specific tracking code to your website.
Configuration: As with cloud-based solutions, you will need to define the events or goals that align with your KPIs.
Monitoring: Once configured, you can use the platform's dashboard to track user behavior and monitor your KPIs.
Server space considerations for self-hosting depend on several factors, including the amount of traffic your website receives and the level of detail in the data you are tracking. As a starting point, 1GB of space could handle over a million simple page views, but more complex tracking would reduce this. Consulting with a server or IT professional could provide a more accurate estimate based on your specific needs.
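The "over a million simple page views per GB" estimate above is back-of-envelope arithmetic, and can be checked under the assumption that a simple tracked page view stores roughly one kilobyte of data:

```python
# Assumption: a "simple" page view record occupies about 1 KB of storage.
BYTES_PER_VIEW = 1_000
GIGABYTE = 1_000_000_000

views_per_gb = GIGABYTE // BYTES_PER_VIEW
print(views_per_gb)  # → 1000000
```

Richer tracking (custom events, full user agents, session replays) raises the bytes per view and lowers this figure accordingly.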
It is also possible to start with a cloud-based solution for quick setup and immediate insights, then transition to a self-hosted solution once it is ready. This allows you to benefit from analytics data right away while your more robust, privacy-centric solution is being prepared.
Any Building Block must follow the security requirements that are outlined in the Cross-Cutting Requirements section of this document. In addition, specific security related functionality should be provided in any GovStack implementation. An API Gateway may be provided to manage communication between building blocks as well as any incoming requests from external systems. The API Gateway will work with the Information Mediator Building Block and provide additional controls and security while coordinating API calls between the application and various Building Blocks.
Note: The API Gateway functionality is connected to the GovStack Adaptor concept which is described in Section 6 (Onboarding Products) of the Non-Functional Requirements document
Second, some type of Identity and Access Management or Role-Based Access Control should be implemented as part of a GovStack solution, as described in Section 7.4 of this Security specification.
The functional requirements for both of these components are described below.
Although API Management and Gateway services are an architectural element, this section of the document also describes the detailed functional requirements for implementing API management, governance and gateway services for GovStack. Explicitly, all communication between building blocks (BBs) and applications shall be via open, API-based access.
The goal of this endeavor is to address primary security concerns centrally and to create a consistent, modern, cloud-ready approach to publishing APIs to 3rd parties (both internal and external), governing and managing access to APIs by policy, and creating a centralized, secure point of access to every API endpoint exposed through GovStack. These functional requirements do not define specific APIs (the APIs themselves are implemented by other building blocks); they only define the functionality that must be implemented within the bounds of the Security Building Block and how it is to be applied to other building blocks.
8.1.1 Multiple API Gateways (REQUIRED)
The ability to implement segregated gateways for both internal and external API traffic. This means that internal API-driven integration traffic and external API access traffic are to be segregated into separate gateway infrastructure components, with access controlled by network segregation and network access policy.
8.1.2 Standards Based Identity and Access (REQUIRED)
The ability to implement standardized authentication, authorization and encryption protocols including federated identity (OAuth2, OpenID Connect, SAML2, SSO, SSL, TLS etc.). These standards MUST be supported and incorporated into any chosen API Management product out-of-the-box for controlling access to API interfaces.
Note that the native API interfaces for each building block's components may be implemented in clear text with no authentication and/or encryption etc. so long as the access to these interfaces is firewalled by network and network policy such that it is ONLY accessible through the API Gateway. This is intended to simplify, standardize and expedite API development, deployment and management.
Note that the API interface specifications for the API Gateway and API management services are based on open, standards-based identity and access, which is in turn based on OpenAPI 3.0. The following link describes how OAuth2 is used in OpenAPI 3.0 standards-based identity and access: https://swagger.io/docs/specification/authentication/oauth2/. It is this style of standards-based identity and access that must be supported. Note that additional API interfaces may be exposed by the API Management and Gateway solution, but they would predominantly be targeted for incorporation into the DevOps CI/CD tool chain rather than exposed to other BBs.
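As a minimal illustration of gateway-side token checking, the sketch below issues and verifies an HMAC-signed bearer token. This is a simplified stand-in, not a real OAuth2/JWT implementation; in practice the gateway product would validate tokens issued by an OAuth2/OIDC provider:

```python
# Illustrative HMAC-signed bearer token, NOT a full OAuth2/JWT stack.
# The shared secret and claim names are assumptions for this sketch.
import base64, hashlib, hmac, json, time

SECRET = b"shared-gateway-secret"  # assumption: symmetric key for the sketch

def issue_token(subject: str, scope: str, ttl: int = 3600) -> str:
    claims = {"sub": subject, "scope": scope, "exp": int(time.time()) + ttl}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token: str):
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                         # bad signature: reject the call
    claims = json.loads(base64.urlsafe_b64decode(payload.encode()))
    if claims["exp"] < time.time():
        return None                         # expired token: reject the call
    return claims

token = issue_token("alice", "read:records")
print(verify_token(token)["sub"])           # alice
```

The point of the sketch is the enforcement pattern (verify signature, then expiry, then pass claims downstream), not the token format itself.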
8.1.3 Identity Store Plugins (REQUIRED)
The ability to utilize separate identity stores as repositories for identity and perform proxied authentication to such repositories (for example LDAP) using multiple credentials including digital identity certificates.
8.1.4 API Protection Features (REQUIRED)
The ability to support a range of security and protection features, such as:
Standard API keys
App-ID key pairs
IP address filtering
Referrer domain filtering
Message encryption
Rule-based routing
Payload security
Channel security
Defense against common XML and JSON attacks
Low- to no-code security configuration
PCI compliance
Note that this is not an exhaustive list and additional policy protection features may strengthen the value of any given solution.
8.1.5 Centralized API Policy Based Access (REQUIRED)
The ability to implement policy based access management for API endpoints. The policy MUST be able to be implemented centrally then applied across multiple gateways and all API endpoints.
8.1.6 API Endpoint Transformation (REQUIRED)
The ability to support multiple standards for proxying endpoints and exposing them as standard OpenAPI endpoints (see Ref 1), including transformations such as XML ↔ JSON and SOAP ↔ REST.
Note that this is not an exhaustive list of the required transformations and that additional transformations may reinforce the strength of a solution’s flexibility and adaptability.
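A minimal sketch of the XML → JSON direction, assuming a simple payload without repeated sibling tags (which a real mapping policy would handle as lists); gateway products apply configurable mapping policies rather than ad-hoc code like this:

```python
# Sketch of an XML -> JSON payload transformation of the kind a gateway
# might apply when proxying a legacy endpoint.
import json
import xml.etree.ElementTree as ET

def xml_to_dict(element: ET.Element):
    """Recursively convert an XML element tree into plain dict/str values.
    Note: repeated sibling tags would collapse; a real mapper emits lists."""
    children = list(element)
    if not children:
        return element.text or ""
    return {child.tag: xml_to_dict(child) for child in children}

payload = "<person><name>Ada</name><role>admin</role></person>"
root = ET.fromstring(payload)
print(json.dumps({root.tag: xml_to_dict(root)}))
```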
8.1.7 Alternative API Protocols (REQUIRED)
The ability to support multiple common protocols such as JMS, WS, MQTT.
Note that this is not an exhaustive list and that additional alternative protocols supported may strengthen the value of the proposed solution.
8.1.8 API Versioning and Lifecycle (REQUIRED)
The ability to support multiple API versions and control multiple API versions and API lifecycle management.
Note that there are many potential features available to strengthen the solution proposition such as version dependency management and deployment rollback etc.
8.1.9 API Call Traffic Shaping (RECOMMENDED)
The ability to implement traffic transformation and traffic shaping. This is typically implemented as single/dual rate, private (per node) and shared (by multiple nodes) shapers.
Note that additional traffic shaping capabilities may strengthen the solution proposition.
8.1.10 API Call Rate Limiting (REQUIRED)
The ability to implement rate limiting for API calls on an API-by-API basis. This limits the rate at which APIs can be called by a consumer (for example, 100/second) and usually offers many flexible options, catering for policy-driven rate limits based on busy times and similar factors. Principally, it comes down to the offered SLA.
Support for complex SLA construction may strengthen the solution proposition.
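One common way gateways implement per-consumer rate limits is a token bucket; the sketch below is illustrative and not taken from any specific product (time is passed in explicitly to keep the example deterministic):

```python
# Sketch of a token-bucket rate limiter enforcing a per-consumer call
# rate (e.g. 100 requests/second with a small burst allowance).
class TokenBucket:
    def __init__(self, rate: float, burst: float, now: float = 0.0):
        self.rate = rate          # tokens added per second
        self.capacity = burst     # maximum bucket size (burst allowance)
        self.tokens = burst
        self.last = now

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False              # caller would respond 429 Too Many Requests

bucket = TokenBucket(rate=100, burst=5)
print([bucket.allow(0.0) for _ in range(6)])  # 5 pass, the 6th is rejected
print(bucket.allow(0.1))                      # True: tokens refilled
```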
8.1.11 API Call Quotas (RECOMMENDED)
The ability to implement quotas for API calls from specific clients (daily, weekly, monthly etc.). This restricts the number of API calls a client can make and also often includes flexible options so that a complex SLA can be constructed.
Support for complex SLA construction may strengthen the solution proposition.
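A quota differs from a rate limit in that it counts calls over a calendar period rather than smoothing instantaneous throughput. A minimal sketch, with an illustrative monthly bucket per client (real gateways persist these counters and attach them to SLA policies):

```python
# Sketch of per-client monthly API call quotas.
from collections import defaultdict
from datetime import date

class QuotaPolicy:
    def __init__(self, monthly_limit: int):
        self.monthly_limit = monthly_limit
        self.usage = defaultdict(int)       # (client, period) -> call count

    def allow(self, client_id: str, day: date) -> bool:
        period = f"{day.year}-{day.month:02d}"   # one bucket per month
        key = (client_id, period)
        if self.usage[key] >= self.monthly_limit:
            return False                         # quota exhausted
        self.usage[key] += 1
        return True

policy = QuotaPolicy(monthly_limit=2)
d = date(2024, 5, 1)
print([policy.allow("client-a", d) for _ in range(3)])  # [True, True, False]
print(policy.allow("client-a", date(2024, 6, 1)))       # True: new month
```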
8.1.12 API Call Logging, Monitoring and Alerts (REQUIRED)
The ability to implement logging and monitoring of API calls with reporting and administrative alerts. Typically extensive functionality with multiple logging levels is implemented.
Support for flexible levels of logging (for example debug, trace etc.) may strengthen the solution proposition.
8.1.13 API Call Analytics (RECOMMENDED)
The ability to implement advanced analytics with out-of-the-box charts and reporting on demand along with the ability to trigger alerts based on analytics.
The strength, flexibility, feature set and appeal of the analytics and charting capabilities may strengthen the solution proposition. Note that this can optionally be implemented as a separate tooling layer perhaps using tools such as Prometheus and Grafana etc.
8.1.14 API Virtualization (RECOMMENDED)
The ability to implement virtualized API endpoints. API virtualization is the process of using a tool that creates a virtual copy of your API, mirroring all of the specifications of your production API, and using this copy in place of your production API for testing.
Note that this is NOT API mocking but provides an actual endpoint for solution testing to proceed unhindered.
8.1.15 API Developer Portal (RECOMMENDED)
The ability to publish API specifications through a developer portal using open standards. Includes features such as portal availability across deployment types (on-premise, cloud, etc.), interactive API documentation, developer metrics, developer portal templates, portal customization (HTML, CSS, etc.) and the ability to withdraw developer keys, either temporarily or permanently.
Note that advanced developer portal features may strengthen the value of the solution proposition.
8.1.16 Flexibility in API Deployment Architectures (REQUIRED)
The ability to support a diverse array of deployment architectures, including standalone, on-premise (private cloud) and public cloud models, including the ability to support a fully integrated, container-based microservices architecture. The architecture should also allow key components and interfaces to be separated to meet complex network security needs.
Note that the more flexible the deployment architectures are, the stronger the solution proposition.
8.1.17 Advanced DevOps Artifact Deployment (RECOMMENDED)
The ability to support advanced DevOps deployment techniques such as “canary deployment”, “blue-green deployment” and “AB testing”.
Additional advanced deployment scenarios and innovations will strengthen the value proposition of the proposed solution.
8.1.18 File Storage Integration (RECOMMENDED)
The ability to implement integration with file storage platforms such as S3 (not an exhaustive list).
Note that flexibility in the support for storage integration across additional file storage standards will strengthen the value of the solution proposition.
8.1.19 API Monetization (OPTIONAL)
The ability to monetize API calls by specific 3rd parties or partners, whether simply to raise revenue or to create partnership incentives through 3rd-party portals. Features such as billing support, multiple revenue-generation models, low- to no-code monetization configuration, third-party payment system integration, prepay and/or postpay invoicing, multi-currency support and tax compliance are negotiable.
Note that the more advanced the monetization features are, the higher the solution value proposition is.
8.1.20 High Availability (REQUIRED)
The solution provided and any associated components MUST be highly available and utilize clustering technology in order to provide a minimum of 24x7x365 service with 99.99% availability ("four nines").
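For concreteness, the downtime budget implied by a given availability target can be computed directly; 99.99% allows roughly 52.6 minutes of downtime per year:

```python
# Downtime permitted per year at a given availability percentage.
def allowed_downtime_minutes_per_year(availability_pct: float) -> float:
    return (1 - availability_pct / 100) * 365 * 24 * 60

print(round(allowed_downtime_minutes_per_year(99.99), 1))  # ~52.6 minutes
```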
8.1.21 Open Source Based (RECOMMENDED)
The offered solution SHOULD be based fully on open-source components. The vendor may offer support subscriptions so long as the offered solution does not require those subscriptions in order to deliver the core functionality specified in this document.
8.2 Identity Lifecycle Management
The following diagram defines the basic lifecycle required for managing identities:
What is required is a flexible and comprehensive user lifecycle management solution which provides the following generalized features and functions:
8.2.1 User Administration Tools (REQUIRED)
These are required so that administrative users and/or help desk users can centrally manage all user profiles and records. These features include:
Unlocking accounts and resetting passwords when needed
Changing user status
Session management – visibility of active users and the ability to kill a session if needed
Workflow – monitoring all workflows and terminating them if necessary
Reviewing audit logs for authentication, access and changes to identity records
Managing user profile attributes, including credentials
Managing user access rights for resources such as documents and APIs
8.2.2 Multi-Source Identity Integration (REQUIRED)
Integration with multiple source identity systems (such as the Identity BB, databases and LDAP) to automatically initiate provisioning/deprovisioning activities related to enrollment, policy changes and un-enrollment processes.
8.2.3 Multi-Source Identity Synchronization (REQUIRED)
Multi-source synchronization of user identity data. This allows the system to integrate and synchronize with an authoritative source such as the national ID, the social security system or, perhaps more likely, the Identity BB via an API/plugin approach. This is required both for the initial load of identities and for ongoing provisioning, re-provisioning and deprovisioning.
The synchronization must determine which systems a user should be provisioned to/de-provisioned from, which permissions should be set or revoked for the applications that a person is entitled to and whether or not provisioning should be automatic or if an additional workflow should be triggered for human or other automated processing.
8.2.4 Identity Reconciliation (REQUIRED)
Automatic user identity record reconciliation. This is where synchronization is used to detect changes in the source system identity data and reconciliation is used to detect, compare and resolve the changes.
For example, a user record has been added to a target system manually instead of through IAM. In this case, reconciliation can be configured to either:
Add the user to the IAM system;
Remove the user from the target system; or
Do nothing but flag the anomaly to an administrator.
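The detect/compare/resolve logic above can be sketched as a set difference between source and target identity stores; the action names and the policy switch are illustrative only:

```python
# Sketch of identity reconciliation: diff source identities against a
# target system and emit actions per the configured policy.
def reconcile(source_ids: set, target_ids: set,
              policy: str = "flag") -> list:
    actions = []
    for user in sorted(target_ids - source_ids):   # exists only in target
        if policy == "adopt":
            actions.append(("add-to-iam", user))
        elif policy == "revoke":
            actions.append(("remove-from-target", user))
        else:
            actions.append(("flag-to-admin", user))
    for user in sorted(source_ids - target_ids):   # missing from target
        actions.append(("provision", user))
    return actions

print(reconcile({"alice", "bob"}, {"bob", "carol"}))
# [('flag-to-admin', 'carol'), ('provision', 'alice')]
```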
8.2.5 Self Service Portal and Workflow (REQUIRED)
A customizable, workflow driven, self-service user interface portal to enable administrators to create and manage policies, users and the various artefacts. Must support approval workflows to multiple stakeholders.
8.2.6 Advanced Password Management (REQUIRED)
The advanced password management capabilities must include the following:
Self-service Password Reset (SSPR)
Administrative change password
Password synchronization with directory and other resources.
Must work across both cloud and on-premise applications, lowering costs and improving security for hybrid cloud deployments
Ability to enforce strong password policies
Self-service password reset, which allows users to reset their own password without going to the help desk. Reset is via a combination of challenge/response questions, a one-time link via registered email, a one-time token via SMS or mobile device messaging, or a one-time PIN.
Password aging with password change reminders sent via email and potentially mobile messaging.
Password change synchronization across all target systems.
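As an illustration of the one-time-PIN reset path, the sketch below generates and verifies a single-use PIN. Token storage, delivery channel and expiry handling are all simplified assumptions:

```python
# Sketch of a single-use PIN flow for self-service password reset.
# In-memory storage and the missing expiry check are simplifications.
import hashlib, secrets

RESET_PINS = {}   # user -> hashed one-time PIN (assumption: in-memory store)

def start_reset(user: str) -> str:
    pin = f"{secrets.randbelow(1_000_000):06d}"       # random 6-digit OTP
    RESET_PINS[user] = hashlib.sha256(pin.encode()).hexdigest()
    return pin    # in reality: delivered via SMS/email, never returned

def complete_reset(user: str, pin: str) -> bool:
    expected = RESET_PINS.pop(user, None)             # pop => single use
    return expected == hashlib.sha256(pin.encode()).hexdigest()

pin = start_reset("alice")
print(complete_reset("alice", pin))   # True: correct PIN, now consumed
print(complete_reset("alice", pin))   # False: PIN already used
```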
8.2.7 User Access Request Management (REQUIRED)
Whilst provisioning processes can be triggered through the automated user lifecycle management functionality, the IAM solution must also provide a self-service feature through the portal via which end-users can request access. This access-request functionality is provided through a shopping-cart and service-catalog design in which the user selects the services to which they would like to request access. Applications and application-specific entitlements, such as membership of an LDAP group or a role on a public cloud provider such as AWS, must be supported as part of the access grant process.
The access request feature must also include the optional selection of a “Profile Role” capability – which is a role defined for a job/position that can grant access to a number of applications that are needed for a particular job role (otherwise known as delegation).
The access request/approval workflows provided must support:
Multiple approvers – Must be able to define as many approval steps needed and select common targets such as a supervisor, object owner or admin, and group of approvers.
Service Level Agreements (SLA) - Must ensure that tasks are completed in a timely manner so if they are not, then they can be escalated to the appropriate person after expiry time.
The access request approval functionality must also support basic “delegated approval” and “out of office” functionality so that there are no barriers to self service access requests when the usual approvers are not available.
8.2.8 REST API (REQUIRED)
A REST (representational state transfer) based API for external integration of user provisioning and deprovisioning is required. It must support the same OpenAPI standards defined by the Architecture Blueprint and Functional Requirements (see Ref 1). An example of such a REST API is provided with the OpenIAM suite specification here: https://www.openiam.com/products/identity-governance/features/api/. This is intended as an example of the API style that is expected in such a suite. This is not definitive and may vary based on the proposed IAM suite.
8.2.9 Orphan Management (REQUIRED)
Organizations which are not actively using an IAM platform often have orphaned user records in their business applications. These are records that result from users being given access and not having that access revoked when the person is unenrolled from a service.
Orphan management functionality consolidates all the orphaned records and provides administrators with tools to either clean up these records or link them to the correct user.
8.2.10 Access Certification (REQUIRED)
Regulatory requirements such as GDPR, HIPAA and SOX, combined with an increased focus on security, are causing both public and private organizations to implement access certification policies. Scheduled access certification campaigns aid in complying with these regulatory mandates as well as improving security by guarding against the access violations which lead to security breaches.
However, when performed manually, these activities can be error-prone and very time-consuming for most mid-to-large organizations. The lack of consistency resulting from manual processes leads to failed compliance audits, and threats resulting from unauthorized access can slip by undetected. The IAM solution must provide the ability to automate the access certification process, addressing the challenges found when performing these processes manually.
The following types of certifications must be supported by the IAM solution:
User Access Certification
Application Access Certification
Group Access Certification
These campaigns can be scheduled and run at regular intervals or they can be run on demand. The Access Certification functionality in the IAM solution must provide organizations with the following capabilities:
Human Friendly Reviews: End-users (reviewers) using the IAM solution access certification functionality must be able to perform their activities in a familiar self-service user interface. Reviewers must be able to review all the historical access in a central location as well as use tools to compare access (date, time, service etc.) between individuals.
Closed Loop Revocations: During the certification process, reviewers must be able to revoke accounts and entitlements with a simple one-click mechanism. The closed-loop validation mechanism will then ensure that revoked access has been deprovisioned from the target applications.
Support for Cloud, On-Premise and Hybrid Cloud: An increasing number of organizations today have hybrid environments where applications are deployed both on-premise and in the cloud. The IAM solution must provide a central identity governance platform such that the same consistent certification programs can be undertaken irrespective of the applications and infrastructure location.
8.2.11 Workflow Creation (REQUIRED)
The IAM solution identity governance feature set must provide the creation of workflows to support complex processing, integration and approval steps. While custom workflows must be defined, the IAM solution must also provide default out-of-the-box workflow templates for common operations to simplify the configuration effort.
Each of these workflows must support:
Multiple approvers
Service Level Agreements (SLAs) to ensure timely completion (with automatic delegation and temporary changes in the new approvers access rights to enable time-sensitive approval)
The following predefined workflow templates are required to be provided:
Enrollment (Joiner)
Role Change (Mover, Additional Services)
Unenrollment (Leaver)
Status change
Access request (Additional Services)
Group creation (with group policy)
Self-registration (with a customized external workflow to integrate with the Identity BB)
Access Certification
8.2.12 Custom Workflows (RECOMMENDED)
Whilst the IAM solution must provide the above workflow templates, it should also include a BPMN-compliant workflow engine that can be used to create new custom workflows. A graphical process designer, such as one of the many BPMN designer plugins available for the Eclipse IDE or similar, should be included to simplify the effort required to create new custom workflows.
8.2.13 Audit and Compliance (REQUIRED)
Facilitating compliance with regulatory requirements or internal security policies is one of the principal drivers for Identity Governance. The IAM solution must provide tools to help organizations meet compliance mandates.
Organizations deploying the IAM solution as a part of GovStack are required to automate a variety of operations that sometimes utilize workflow-based approval steps.
Detailed audit logs must be associated with each of these operations so that organizations can answer the following fundamental questions:
What access rights does a user currently have?
When were they granted these rights?
How or why were they granted these rights and by whom were they granted?
In addition to access information, the IAM audit logs must also track details related to authentications, password changes, lifecycle status changes, system configuration changes, etc. Using the information provided by these logs, combined with the out-of-the-box reports and self-service tools, organizations must be able to achieve the following:
Provide auditors with clear evidence of compliance (or otherwise)
Proactively review, detect and revoke inappropriate access from any user
Review all access rights before granting any additional access rights
Centrally review and certify application/system access rights across both on-premise and cloud environments
8.2.14 Connectors (RECOMMENDED)
The IAM solution should provide a large range of out-of-the-box connectors for both source and target applications. This is because GovStack will be deployed in more than one country and in unknown application environments, and it must be possible to integrate it with many common enterprise information systems and services, both on-premise and in the cloud, in a simple and cost-effective manner. For example:
Microsoft (Active Directory, PowerShell, Windows, Azure AD, Exchange, SQL Server, Office 365, Azure DevOps, Dynamics365)
Oracle (eBusiness Suite (EBS), IDCS, database)
ERP/HR Systems (Oracle, SAP, ADP, Workday)
Public Cloud (Amazon Web Services, Azure, Google Cloud)
Infrastructure (Database/JDBC, GIT, LDAP (OpenLDAP, eDirectory, Active Directory, ApacheDS etc.), Linux (RHEL, CentOS, Ubuntu), REST web services, SCIM 2, scripts)
Others (Google Workspace (formerly G Suite), Salesforce, SAP HANA, Slack (SCIM), Tableau… not an exhaustive list)
8.2.15 Access Management (REQUIRED)
Access management is an integral part of the required IAM solution. The Access Manager must provide a scalable, secure and consistent solution to implement policy-based access for applications in hybrid cloud environments, for internal users (employees), citizens (external) and 3rd parties (external) alike.
The Access Manager must provide the following tools to enable these objectives:
Web SSO. Web single sign-on must be provided with support for SAML 2, OAuth2 and OpenID Connect (OIDC), plus a proxy to allow SSO to legacy applications. This enables web-based applications to be easily configured for SSO without modification.
Adaptive Authentication. An adaptive authentication system must be provided with the following features:
Password-based authentication
Certificate-based authentication
MFA-SMS/E-mail/Mobile app-based OTP
Adaptive Authentication builds on these options to provide a robust framework where users can build rich authentication workflows using a browser-based drag-and-drop interface.
The flows can take into account a broad range of risk factors including device, context, user choices, geolocation, profile attributes, user behavior and foundational identity systems.
This allows the implementation of a solution which offers a significantly higher level of security while providing an improved end-user experience in comparison to traditional options.
Multi Factor Authentication (MFA). While the IAM solution framework must allow the use of third party MFA products, it should also provide its own MFA solution which is pre-integrated and ready to use. The following MFA options should be provided out-of-the-box:
SMS-based OTP
E-mail-based OTP
Mobile app (iOS or Android) OTP plus push notification support
Social Sign-on (as opposed to single-sign-on). The Access Manager should allow social sign-on from social identity providers such as Google, Facebook and LinkedIn. Social registration significantly reduces the registration effort by allowing select attributes to be dynamically transferred from the social provider. This may or may not be used in practice but is a desired feature.
8.2.16 RBAC Based Authorization (REQUIRED)
The IAM solution must provide a flexible RBAC-based authorization model to enforce security in applications through the Access Manager. The RBAC model must support inheritance as well as direct entitlements, providing the flexibility needed to implement complex real-world requirements. The authorization service must be usable in conjunction with OAuth2 and the Access Gateway to enforce the authorization rules.
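A minimal sketch of RBAC with role inheritance plus direct entitlements, the two grant paths required above; role and permission names are invented for illustration:

```python
# Sketch of RBAC permission resolution with role inheritance and
# direct entitlements. Role/permission names are illustrative only.
ROLE_PARENTS = {"doctor": ["clinician"], "clinician": ["staff"], "staff": []}
ROLE_PERMS = {"staff": {"read:notices"},
              "clinician": {"read:records"},
              "doctor": {"write:prescriptions"}}

def effective_permissions(roles: set, direct: set = frozenset()) -> set:
    """Union of direct entitlements and all permissions inherited
    through the role hierarchy (cycle-safe via the seen set)."""
    perms, stack, seen = set(direct), list(roles), set()
    while stack:
        role = stack.pop()
        if role in seen:
            continue
        seen.add(role)
        perms |= ROLE_PERMS.get(role, set())
        stack.extend(ROLE_PARENTS.get(role, []))
    return perms

perms = effective_permissions({"doctor"})
print("read:notices" in perms, "write:prescriptions" in perms)  # True True
```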
8.2.17 Session Management (REQUIRED)
The IAM solution must provide session management for issues like session timeout, to reduce the exposure created by long-running sessions. This includes APIs to extend expiring tokens for application and user convenience.
8.2.18 Device Registration (REQUIRED)
The IAM solution must provide device registration such that only registered devices can be used to access services by policy.
8.2.19 Fine Grained Audit Logging (REQUIRED)
The IAM solution must provide fine grained audit logging by the Access Manager so that the explicit date, time, user and service access is logged.
8.2.20 Access Gateway (REQUIRED)
An access gateway is required in order to provide protected proxy gateway access to the web through reverse and front side web infrastructures such as Apache and Nginx web servers. This must provide the following functionality:
SSO to legacy applications
Session management
Protection of APIs and application URLs by enforcing authentication and authorization rules unless a 3rd party API Management and Gateway suite is used in which case the access gateway must be configurable to utilize the 3rd party API Gateway.
8.2.21 Legacy SOA Security Features (REQUIRED)
The IAM suite should be able to implement a pure legacy SOA approach. A legacy SOA API with all required operations should be available to facilitate integrations with legacy SOAP/SOA systems. The IAM solution should provide SOA federation for controlling access to services in a legacy SOA environment using SAML, SAML 2 and WS-Security. The IAM solution must be able to enforce policies throughout SOA based services. RBAC and XACML support must be provided to allow the IAM solution to implement a flexible security model that supports the following:
Distributed services (vs monolithic applications)
Services distributed across organizational boundaries
Service Interoperability
Integration of disparate legacy SOA protocols
8.2.22 Web Access Management (REQUIRED)
The Access Gateway must be able to provide coarse-grained authorization when protecting web applications in totality. Requests must be routed through a proxy, which applies authorization rules and forwards the request to the underlying servers providing the application.
The model must be simple to deploy and easy to maintain. User identity must be checked and propagated through HTML header injections, HTTP query strings or HTTPS authentication headers to applications hidden behind a proxy server for the purposes of openness and compatibility. The native URL of these applications must be hidden from the public view (i.e. it is only exposed as a service name in a secure manner).
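A minimal sketch of identity propagation by header injection, including the important step of stripping any caller-supplied identity headers before injecting verified ones; the header names are illustrative:

```python
# Sketch of reverse-proxy identity propagation via header injection.
# Header names are illustrative assumptions, not a standard.
def inject_identity(headers: dict, subject: str, roles: list) -> dict:
    # Strip any caller-supplied identity headers to prevent spoofing.
    clean = {k: v for k, v in headers.items()
             if not k.lower().startswith("x-auth-")}
    # Inject the identity verified by the gateway before forwarding.
    clean["X-Auth-Subject"] = subject
    clean["X-Auth-Roles"] = ",".join(sorted(roles))
    return clean

out = inject_identity({"Accept": "text/html", "X-Auth-Subject": "spoofed"},
                      "alice", ["clinician"])
print(out["X-Auth-Subject"], out["X-Auth-Roles"])  # alice clinician
```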
8.2.23 Single Sign On (SSO) (REQUIRED)
Each partner application, as well as internal applications and building blocks, may have their own set of security credentials and various authentication methods. Such applications may move in and out of security domains.
The user experience suffers when many login credentials must be remembered. Therefore the IAM solution must provide SSO features that allow users to log in once and roam unchallenged through a security realm to which they have been granted access.
This reduces the burden of many passwords and eliminates the need to log in to each application individually. Users must be able to log in once and roam freely across secured domains without being challenged again. Participating security domains must never be required to give up their own credentials.
The ability to hold multiple identities, each with their own roles, permissions, access-levels and entitlements across multiple domains is required and allows for a wide network of co-operating domains to communicate seamlessly.
Authenticated subjects must be able to access restricted resources that would otherwise require multiple logins and credentials, without the need to log in at each domain. The IAM solution's access manager must not be based on a proprietary cookie; it should be based on SAML 2, a well-accepted industry standard for SSO.
Using SAML2 allows the IAM solution's access manager to provide SSO capability not only at the web application tier but also across other layers, such as Web Services, in a completely unified way. SSO must also allow the access manager to integrate easily with existing authentication technologies deployed in any organization.
8.2.24 Federation (REQUIRED)
When GovStack is deployed, it will need to be deployed with partners, suppliers and other organizations. For them to collaborate effectively, identity information needs to be propagated. The IAM solution must be able to manage the processes for federating users when a partner site comes on board or leaves. Federation capabilities must be provided by the IAM access manager solution. New cost recovery streams may be generated for GovStack users through the enablement of trusted partnerships where authentication and authorization are carried out over federated business networks.
Federation refers to interoperation between entities in different security domains, either in different organizations, or in different tiers in the same organization.
A trust relationship must exist between the involved entities to federate identity and enable authentication across realms. Each domain may rely on different technologies and mechanisms to authenticate and authorize.
Federation enables loose coupling at the IDM level separating the way each organization/application/module/building block does its own security implementation while they adopt a common mechanism to propagate identity.
8.2.25 Security Token Service (STS) (REQUIRED)
STS is a system role defined by the WS-Trust specification. A Web Service Client interacts with the STS to request a security token for use in SOAP messages. In addition, a Web Service Provider interacts with an STS to validate security tokens that arrive in a SOAP message. An STS arbitrates between different security token formats.
The token transformation capability defined in WS-Trust provides a standards-based solution to bridge incompatible federation deployments or web services applications. Web service providers should not be required to support multiple authentication mechanisms even though they have to work with different web service clients.
The SAML standard is well recognized, and the IAM solution must provide a Security Token Service that can validate SAML 1.1 and SAML 2.0 tokens to bridge different web services.
8.2.26 Role and Attribute Based Access Control (RBAC/ABAC) (REQUIRED)
The IAM solution's Access Manager must manage Groups, Roles, Permissions and Resources, supporting both RBAC and ABAC. Groups are generally used to model organizational structure, whereas Roles model a person's function within the enterprise. In RBAC, a subject is given one or more roles depending on the subject's job.
Access is determined by the subject's role. In ABAC (Attribute Based Access Control), access is determined by attributes of the subject (person or entity), attributes of the resource being accessed, environmental attributes and the desired action. ABAC is implemented based on the XACML specification. Together, RBAC and ABAC provide:
Coarse-grained access control - based on subject, role and permissions
Ease of administration - roles created for job functions
A subject that must be assigned to a role and execute actions that are authorized for the role
Assigned permissions for job functions based on operations rather than to resource objects
Creation of relationships between Users, Groups, Roles and Resources
Creation and enforcement of policies
Developing an access control strategy based on RBAC provides a clean and flexible model that is easy to maintain over a long period of time.
Policies may be associated with a person’s role. For example, someone in a medical advisor role may be permitted to access applications pertinent to his or her role, but not permitted to access applications related to someone in a doctor's role.
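The combined role-and-attribute check described above can be sketched as follows. This is a minimal illustration: the role names, attributes and policy conditions are hypothetical, and a real deployment would express such policies in the IAM suite or in XACML rather than in application code.

```python
from dataclasses import dataclass, field

@dataclass
class Subject:
    user_id: str
    roles: set = field(default_factory=set)
    attributes: dict = field(default_factory=dict)

@dataclass
class Resource:
    name: str
    attributes: dict = field(default_factory=dict)

# RBAC part: role -> set of (resource type, action) permissions.
# Role names and resource types here are hypothetical examples.
ROLE_PERMISSIONS = {
    "medical_advisor": {("advisory_app", "read")},
    "doctor": {("patient_record", "read"), ("patient_record", "write")},
}

def rbac_allows(subject: Subject, resource: Resource, action: str) -> bool:
    """Coarse-grained check: is the action permitted for any of the subject's roles?"""
    required = (resource.attributes.get("type"), action)
    return any(required in ROLE_PERMISSIONS.get(role, set()) for role in subject.roles)

def abac_allows(subject: Subject, resource: Resource, environment: dict) -> bool:
    """Fine-grained check: attribute conditions layered on top of the role check.

    Example policy: a record may only be accessed from the same department,
    and only during working hours."""
    if subject.attributes.get("department") != resource.attributes.get("department"):
        return False
    return 8 <= environment.get("hour", 0) < 18

def is_authorized(subject: Subject, resource: Resource, action: str, environment: dict) -> bool:
    return rbac_allows(subject, resource, action) and abac_allows(subject, resource, environment)
```

Under these example policies, a doctor in cardiology may read a cardiology patient record during working hours, while a medical advisor (whose role carries no patient-record permission) may not.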
This section describes external APIs that must be implemented by the building block. Additional APIs may be implemented by the building block (all APIs must adhere to the standards and protocols defined), but the listed APIs define a minimal set that must be provided by any implementation.
All APIs will be defined using the OpenAPI (Swagger) standard. The API definitions will be hosted outside of this document. This section may provide a brief description of required APIs; it will primarily contain links to the GitHub repository for OpenAPI definition (YAML) files as well as to a website hosted by GovStack that provides a live API documentation portal. The basic assumption here is that the IAM suite will be acquired rather than built. The suite MUST supply an appropriate API with documented endpoints.
IAM Suite API: An example of such a REST API is provided with the OpenIAM suite specification here: https://www.openiam.com/products/identity-governance/features/api/. This is intended as an example of the API style that is expected in such a suite. This is not definitive and may vary based on the proposed IAM suite. The detailed API documentation for OpenIAM including interface specifications can be found here: https://docs.openiam.com/docs-4.1.14/html/API/index.htm
OAuth2 API: The following link describes how OAuth2 is used in OpenAPI 3.0 standards based identity and access: https://swagger.io/docs/specification/authentication/oauth2/.
SCEP API: A description of the OpenXPKI enrolment workflow and API can be found here: https://openxpki.readthedocs.io/en/latest/reference/configuration/workflows/enroll.html. This is an example of how an API for enrolment, with its associated workflow, should be implemented within the Certificate Authority Server.
LDAP API: A description of the standard REST LDAP API provided by the open-source 389 Directory Server can be found here: https://directory.fedoraproject.org/docs/389ds/design/ldap-rest-api.html#ldap-rest-api. This is an example of how an open API for credential storage and retrieval using LDAP should be implemented.
A workflow provides a detailed view of how this building block will interact with other building blocks to support common use cases.
This section lists workflows that this building block must support. Other workflows may be implemented in addition to those listed.
No specific workflow definitions are required for this building block: they will be inherited from the tools/products chosen to address each security issue or concern, which fundamentally deal with the other building blocks' APIs in a cross-cutting manner, as in the case of API Management.
Other components of the Security BB, such as the IAM suite, will also provide their own workflow tools, but the details of the actual workflows need to be designed once more is known. An example of the type of default workflow provided by an IAM suite is the basic workflow for provisioning new accounts, which can be leveraged by the other BBs. This is typically a basic workflow built in a tool that can be customized to meet specific provisioning needs (perhaps with multiple administrative roles and connections to multiple external systems and modules). The following link describes an example of the basic workflow for provisioning provided by OpenIAM (one of the alternatives): https://www.openiam.com/products/identity-governance/features/provisioning/
Note that each building block is responsible for defining the base configurations and workflows that must be created in the IAM system to enable identity and access to be provisioned. Essentially, the IAM system needs to be augmented with adapters that provision identity and access to target systems and resources in the way those target applications implement their own identity and access. Alternatively, where applications are built from the ground up, they can leverage the IAM suite's API services to implement authentication and access.
These specific workflows and adapters can be defined at the detailed design stage and communicated to the Security WG for implementation in the IAM solution. It must be noted that the security WG is NOT responsible for determining the identity and access policy and the details of access for each role for example. The following need to be identified by each building block and communicated to the Security WG for implementation in the IAM suite configuration build:
Resource Types: for example files, services, APIs, applications and modules to be protected by IAM.
Resources: The definitions of the actual resources and their type provided by and required by each BB that are to be secured through IAM. This must include the target system or component that hosts these resources so that the correct provisioning adapter can be configured for that resource. Note that where the BB or resource has its own identity and access scheme an adapter can be written using the IAM suite API.
Roles: the roles of users of each system, including the resource access required for each role in terms of the Resources above. Each BB must account for the access it requires to be provisioned to other services as part of its process scope. In the case of provisioning a new account, a broader workflow process is needed that is outside the scope of the Security BB and incorporates the Identity and Registration BB. For example, a Doctor role may require verification and validation of certain aspects of identity prior to provisioning access to a specific service. A basic set of sequence diagrams is provided below to reinforce the understanding of what is required.
Approval Workflows: workflow for the approval of the various identity and access requests (complete with approval roles etc).
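The approval workflow named above can be sketched as a minimal state machine. This is a hypothetical shape for illustration only; production IAM suites ship configurable workflow engines, and the approver role name and adapter callable here are assumptions.

```python
from enum import Enum, auto

class State(Enum):
    REQUESTED = auto()
    APPROVED = auto()
    REJECTED = auto()
    PROVISIONED = auto()

class AccessRequest:
    """A single access request moving through a minimal approval workflow."""

    def __init__(self, subject: str, role: str):
        self.subject = subject
        self.role = role
        self.state = State.REQUESTED

    def _check_approver(self, approver_roles):
        # Hypothetical policy: only holders of the "approver" role may decide.
        if "approver" not in approver_roles:
            raise PermissionError("only an approver may decide this request")

    def approve(self, approver_roles):
        self._check_approver(approver_roles)
        self.state = State.APPROVED

    def reject(self, approver_roles):
        self._check_approver(approver_roles)
        self.state = State.REJECTED

    def provision(self, provision_fn):
        """Apply the access via a target-system adapter once approved."""
        if self.state is not State.APPROVED:
            raise RuntimeError("cannot provision an unapproved request")
        provision_fn(self.subject, self.role)
        self.state = State.PROVISIONED
```

The key property this models is ordering: provisioning to the target system can only happen after an explicit approval step, which is the pattern each BB's workflow definition should preserve.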
The following sequence diagrams depict the basic means by which authentication and access control shall be implemented across building blocks. Note that, by definition, an account with no access can be created by any user via self-registration. Both basic self-registration using phone/email and strong self-registration using a foundational ID are to be supported. A sequence detailing technical authentication is also included below, along with sequences defining the remaining aspects of the identity lifecycle (such as provisioning and deprovisioning of access) to be supported by the IAM suite and how they will be leveraged by all building blocks. Such workflows can be articulated in full during the detailed design phase:
This assumes the user already has an account. The authentication credentials are a username or phone number plus a password.
The auth token could be signed and carry an expiration (JWT), which would allow each BB to perform validation itself. Additionally, if the token contains roles and/or a user ID and is not expired, the BB could potentially rely on those.
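Local validation of such a token can be sketched as below, using only the Python standard library and assuming an HS256 (HMAC-SHA256) signature. This is an illustration of the mechanism, not a recommendation: a production BB should use a vetted JWT library and whatever algorithms and claims the IAM suite mandates (often RS256 with a public key).

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Create an HS256-signed JWT (illustration only)."""
    header_b64 = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload_b64 = _b64url_encode(json.dumps(claims).encode())
    signature = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{_b64url_encode(signature)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Verify signature and expiry locally, without a round trip to the IAM server.

    Raises ValueError if the signature is invalid or the 'exp' claim has passed."""
    header_b64, payload_b64, signature_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(signature_b64)):
        raise ValueError("invalid signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

Once verified, the BB can read `roles` and the subject identifier straight from the claims, which is exactly what makes self-validation attractive for cross-BB calls.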
For a specification, see the Identity and Verification Building Block Specification.
Note that role creation (e.g. farmers, doctors) is handled by the IAM solution, via either an administration UI or an API. Building blocks can provision new roles via the API.
It’s assumed the user clicked a link to access the service in the Building block UI.
The users are authorized with a valid access token for their email or phone number.
This flow assumes the user has an account and is currently authenticated.
This flow assumes the user has an account and is currently authenticated.
The workflows MUST adhere to all standards defined in this document as well as in the GovStack architecture document (link to appropriate section in architecture document)
No specific standard workflows are required in the context of the security building block. All workflows involving identity and access, for example, are the responsibility of each building block working group and are to be defined during detailed design. Such workflows can and should be implemented using the workflow engine built into the IAM suite, perhaps extending from (or to) a meta-workflow defined in the context of another BB.
The security building block predominantly deals with the cross-cutting security concerns of each other building block and defines the basis for the implementation of solutions to address these concerns. The only interaction required is for API Management and Gateway services. These interactions are depicted and documented in Architecture Blueprint and Functional Requirements (see Ref 1).
The sequence diagrams below depict examples of how a building block might interact with the API Management and Gateway solution. This is only relevant for the API Management and Gateway services in the context of the security building block. A higher level sequence diagram depicting API interactions for building blocks is depicted and documented in Architecture Blueprint and Functional Requirements (see Ref 1).
Build services that are usable and equitable for all.
Accessibility matters because it gives all citizens equal access to services, helps comply with legal requirements, fosters inclusivity, enhances user experience, avoids discrimination, and contributes to government credibility, cost savings, innovation, and international reputation.
There can be significant interdependencies between UI components for different accessibility requirements. The World Wide Web Consortium and its Web Accessibility Initiative have developed standards that address the needs of developers, authoring tools and accessibility evaluation tools, along with guidelines on how to make user agents (browsers, browser extensions, media players, readers) accessible to users.
Ensure your user testing and feedback collection includes a diverse range of users. Include users with disabilities and those from various backgrounds and experiences. This approach helps identify potential accessibility and inclusivity issues that might be overlooked by individuals without these experiences.
To create a service that caters to all users, you must understand their unique capacities. Consider factors like:
Time available
Financial situation
Ease of access to an interface (device or person)
Interface capability and confidence
Service process-related confidence
Awareness of the service, its purpose, and access options
Ability to comprehend information
Mental health/emotional capacity
Trust in service robustness, security, reliability
Ability to provide required information
Willingness to use the service, at all or in the most cost-effective way
With this understanding, identify the potential barriers users might face when accessing and using the service. These barriers could be:
Physical: Disabilities that affect a person's ability to interact with interfaces.
Technological: Limited access to high-speed internet, up-to-date devices, or the latest software.
Cognitive: Cognitive impairments, learning disabilities, or language barriers that make the service hard to understand or use.
Economic: Economic limitations that restrict access to necessary technology or internet access.
Geographical: Limited internet access in rural or remote areas, or cultural or language differences in different regions.
Privacy and Security: Concerns about personal data usage or protection.
Having understood user capacity and potential barriers, you should design your service to be as inclusive as possible. This could involve simplifying processes, offering alternative access methods, or providing additional support where needed.
While digital public services offer many benefits, they may not be suitable or preferred for all users. Some users may lack the necessary technology or digital literacy, while others may simply prefer traditional methods. In these cases, consider offering alternative mechanisms for accessing the service, such as phone support or physical locations.
In all aspects of your design process, ensure principles of Gender Equality and Social Inclusion (GESI) are adhered to. This means your service should be accessible and fair to all users, irrespective of their gender, age, ethnicity, disability status, income level, etc.
Example
In a public healthcare service, implementing GESI principles led to the development of a more inclusive appointment scheduling system.
It was noticed that many women, especially from lower-income backgrounds, were unable to attend appointments during standard working hours due to their caregiving responsibilities. By extending service hours to evenings and weekends, and providing childcare services, the service became more accessible to this demographic, significantly increasing attendance rates.
Further reading:
Make sure that the implementation of the Building Blocks and the overall digital government service meets the Web Content Accessibility Guidelines (WCAG) standards for accessibility.
WCAG provides a set of internationally recognised guidelines for creating accessible web content. Familiarise yourself with the guidelines to ensure your service meets the necessary accessibility requirements.
Test the service using accessibility tools to identify and address any accessibility issues. These tools help simulate the experience of users with disabilities and uncover potential barriers.
Consider using screen readers, keyboard navigation, and colour contrast checkers, among other tools, to ensure your service is accessible to all users.
If you encounter complex accessibility challenges or require specialised knowledge, consider seeking expert assistance or consultation. Accessibility experts can provide guidance and support in ensuring your service meets the highest accessibility standards. They can help identify and address accessibility issues specific to your service and provide valuable insights throughout the design and development process.
Further reading:
Analyse user demographics data to determine the languages spoken by your users. This data can be obtained through user surveys, usage data analysis, or from market research. You may also find legislation for languages the government must support.
Consider the feasibility and utility of providing multilingual support. This includes the translation of key content, interface elements, instructions, and forms into the most commonly spoken languages of your user base. Automated translation services can be a cost-effective starting point, but human review is essential to ensure cultural appropriateness and accuracy.
Even in the primary language of the service, avoid using complex jargon and technical terms. Use plain, simple language that can be easily understood by a broad audience. This will also facilitate more accurate translations.
Implement language detection based on the user's browser or system settings and offer the option for users to manually set their language preferences.
Use the Accept-Language HTTP header to detect the language preferences set by the user in their browser or device settings. This can give you a default language to serve to the user initially.
Store the user's language preference (in cookies or user profiles) so that you can load the website or app in their preferred language in subsequent visits or sessions.
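The detection step above can be sketched as a small negotiation helper. This is a hypothetical function, and a simplified parser compared to the full header grammar; frameworks such as Django or Express provide this negotiation out of the box.

```python
def pick_language(accept_language: str, supported: list, default: str = "en") -> str:
    """Pick the best supported language from an Accept-Language header value.

    Parses entries such as "fr-CH, fr;q=0.9, en;q=0.8", orders them by their
    q weight, and returns the first language (or its primary subtag) that the
    service supports, falling back to a default. Supported codes are assumed
    lowercase."""
    candidates = []
    for part in accept_language.split(","):
        part = part.strip().lower()
        if not part:
            continue
        if ";q=" in part:
            lang, _, q = part.partition(";q=")
            try:
                weight = float(q)
            except ValueError:
                weight = 0.0
        else:
            lang, weight = part, 1.0
        candidates.append((weight, lang.strip()))
    for _, lang in sorted(candidates, key=lambda c: c[0], reverse=True):
        primary = lang.split("-")[0]  # "fr-ch" also matches a supported "fr"
        if lang in supported:
            return lang
        if primary in supported:
            return primary
    return default
```

The returned code would then be stored (cookie or profile) as the user's preference, per the point above.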
Make sure options to switch languages are clearly visible and easily accessible across all pages of the service. Users should not have to search or dig deep into settings to find this option.
If you're offering support for languages that are read right-to-left (like Arabic or Hebrew), make sure your user interface can handle that transition seamlessly. This is not just about text direction; UI elements and navigation should also mirror to offer a consistent RTL experience.
Foster a culture of inclusivity within the team and encourage ongoing education and awareness of inclusivity and diversity's best practices. [resource on co-design]
Accessible design isn't just for users with disabilities - it enhances usability for all users. Simple features like captions or clear language can help everyone.
Accessibility isn't just about permanent disabilities. Users may experience temporary or situational impairments, like a broken arm or a bright environment, where accessibility features can improve their experience.
Accessible design is particularly important for older adults who may experience changes in vision, hearing, and motor skills. Designing with accessibility in mind ensures your service is user-friendly for all age groups.
In many places, accessibility is not just an ethical duty but also legally required. Providing accessible services ensures everyone can use your service, regardless of their abilities.
Accessible design often results in more robust and flexible services. By prioritising accessibility, you make your service more resilient to future changes and adaptable to different technologies or platforms.
Incorporating accessibility from the start of the design process is efficient. Retrofitting accessibility features later can be more time-consuming and costly. Make accessibility a foundational part of your design process, not an afterthought.
The notion that citizens expect governments to use formal language may not be entirely accurate. Using consistent, simple, and clear language is a core principle of user-centric design.
By adopting simple language:
You enhance accessibility - Not everyone has the same reading level. Using simpler language makes your content accessible to a broader audience, including those with learning disabilities or for whom English is a second language.
You improve clarity and understanding - Using complex or technical language can be a barrier to comprehension. Simpler language ensures all users can understand the information being communicated.
You save time - When citizens can quickly understand the information they're reading, it reduces the time spent on misinterpretations and questions, leading to a smoother user experience.
Research exercises like highlighter testing have shown that simple language is more effective for comprehension. In these tests, participants highlight parts of the text they find difficult to understand, revealing areas where simpler language could improve comprehension.
Following guidelines like the United Kingdom government's Content Design guidance can assist in creating accessible and easy-to-understand content. Simplicity in language is a tool that enhances inclusivity and usability, reflecting a government that values its citizens' diverse capabilities and time.
Further reading: Testing for UX writers - know when your words are working.
Design the service to work well with other government services, systems, and platforms.
Identify potential integration points within your journey. Collaborate with relevant government agencies and stakeholders to align with broader interoperability initiatives and Building Block recommendations.
Style Guide: A style guide is a reference tool that establishes design and writing standards. It includes branding elements (logos, colour schemes), UI components, and coding standards, ensuring consistency across a product or set of products.
Frontend Framework: This is a structured package of standardised code (HTML, CSS, JS documents, etc.) that forms the foundation for designing a website or web application. Examples include React, Vue.js, and Angular.
Design System: A design system is an overarching guide that includes the style guide and coded UI components (often from a frontend framework). It houses design principles, visual design rules, coded components, and other standards guiding the creation of a range of products or services.
Creating a style guide is a crucial first step in building a comprehensive design system. It defines the visual aspect of the system which can later be expanded to include coded components, UX patterns, and more.
If a style guide does not exist already, here are the steps for its creation:
Audit Existing Designs - Review current services to identify common components and styles.
Create and Document Components - Develop reusable design components (like buttons or headers) and write clear guidelines for their use.
Define Visual Styles - Document your colour palette, typography, grid system, spacing, and other visual styles.
Ensure Accessibility - Incorporate guidelines to ensure your digital services are usable by all citizens, for example, colour contrast between text and backgrounds.
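The colour-contrast point above is precisely testable: WCAG 2.x defines relative luminance and a contrast ratio between two colours, with level AA requiring at least 4.5:1 for body text. A sketch of that check:

```python
def srgb_to_linear(channel: int) -> float:
    """Convert one 8-bit sRGB channel to linear light (WCAG 2.x definition)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_a: tuple, colour_b: tuple) -> float:
    """WCAG contrast ratio between two colours (1.0 to 21.0).

    Level AA requires at least 4.5:1 for body text and 3:1 for large text."""
    lighter, darker = sorted(
        (relative_luminance(colour_a), relative_luminance(colour_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

For example, black on white yields the maximum 21:1, while the common grey #777777 on white comes out just under 4.5:1 and so narrowly fails AA for body text.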
If you do not have a consistent style guide across government, the creation of, usage of, and contribution to a common style guide should be included as a requirement for suppliers. This ensures the supplier's work aligns with the existing style guide and contributes to its improvement or expansion, leading to a more collaborative, efficient, and cohesive design ecosystem. As part of the RFQ:
Include a copy of the style guide if it exists - Make sure that suppliers have a copy of the existing style guide to understand its current state and requirements.
Require adherence to the style guide - Suppliers should demonstrate an understanding of the style guide and show how they will adhere to it.
Encourage contributions - Request that suppliers identify potential improvements or expansions to the style guide during their work, and include a process for proposing and implementing these changes.
GovStack offers a robust set of well-tested patterns for common user journeys. These can serve as a foundational structure for your services, presenting an outline for the necessary pages within your user journey and providing guidance for the content on each page.
While these patterns offer a solid starting point, it is important to note that they do not provide coded or designed components. For implementing these aspects, refer back to our guidance on setting up a front-end framework.
Whether employing GovStack patterns or creating your own, maintaining consistency across your government services is key. After establishing your patterns, make sure to test the end result with users. This will ensure that your service is not only consistent but also user-friendly and effective.
A frontend framework is a package consisting of pre-written, standardised code in files and folders (HTML, CSS, JavaScript). It gives developers a base to build upon while still allowing flexibility in the final design. Essentially, a frontend framework provides a structured, reliable, and reusable template.
Efficiency: They save developers a significant amount of time by providing a foundation.
Consistency: By using the same framework, teams can maintain consistency across projects.
Responsiveness: Most frontend frameworks are built to be fully responsive right out of the box.
Community: Popular frameworks have a large community, which can offer support, additional tools and plugins to extend functionality.
Setting up a frontend framework largely involves downloading the framework's files and incorporating them into your development environment. The specific steps may vary depending on the chosen framework.
Considerations for choosing a frontend framework include the framework's community support, ease of use, performance, and compatibility with your project's requirements. One crucial aspect to consider is the level of support for accessibility and right-to-left (RTL) language support.
Here's a comparison of five popular frameworks and libraries:
| Framework | Accessibility | Community Support | RTL Language Support | Ease of Use/Set Up |
|---|---|---|---|---|
| React | Strong, with a lot of focus on making the web accessible. | Large and active community, many third-party libraries. | Native RTL support isn't included, but libraries such as `react-with-direction` can be used. | Moderate, requires knowledge of JavaScript and JSX. |
| Vue.js | Strong, with dedicated sections in the documentation about accessibility. | Growing rapidly, plenty of resources and libraries available. | No native RTL support, but possible with additional CSS and libraries. | High, simpler syntax and structure compared to React and Angular. |
| Angular | Strong, built-in accessibility features in Angular Material. | Large and active community, backed by Google. | No native RTL support, but can be implemented with additional CSS. | Low, steep learning curve due to complexity and depth of the framework. |
| Bootstrap | Good, most components are designed to be accessible. | Very large community, many templates available. | Built-in support with Bootstrap v4.0 onwards. | High, easy to integrate and get started with basic HTML and CSS. |
| Chakra UI (React based) | Very strong, all components are accessible and WCAG compliant. | Growing community, becoming a go-to choice for accessible component library. | Built-in support with RTL-friendly components. | High, easy to use with good documentation but requires understanding of React. |

Choose a frontend framework fitting your needs as the foundation for a reusable design system. This system, combining the framework's technical foundation with your specific visual styling and design guidelines, will ensure uniformity and accessibility across projects within your organisation. Regular updates to this living system will accommodate evolving needs and interaction patterns, solidifying an efficient and inclusive digital experience throughout government services.
Consider potential integration points and interoperability requirements early in the design process, involving relevant stakeholders, including the Building Blocks development team.
Identify functionalities that can be provided by other services, for example, departmental registries.
Integrate these services using their APIs.
Test the integrations to ensure they work correctly.
Be prepared to handle any downtime or changes to these services.
Transparency: Working in the open allows stakeholders, other teams, and even the public to understand how decisions are made and progress is tracked. This transparency can build trust and enable informed discussions.
Collaboration: By making your work accessible, other teams can provide input, learn from your approach, and possibly contribute, fostering a collaborative ecosystem.
Reusability: Openly sharing your work means other teams can reuse and adapt your processes, tools, or code, avoiding duplication of effort and accelerating development across the board.
Feedback and improvement: Working in the open invites feedback from a wider audience, which can bring different perspectives and promote continual improvement.
Open source your code: Where possible, and with the necessary security considerations, share your code repositories publicly. This allows other developers to learn from, reuse, or contribute to your code.
Share your processes and decisions: This can be done via blog posts, open meetings, or shared documents that detail your working practices, decision-making processes, and project progress.
Invite feedback: Provide channels for feedback on your work, whether that's through comments on a shared document, feedback on a blog post, or interactions on a code repository.
Promote open standards: Adhere to and advocate for open standards in your work. This not only aids in compatibility and interoperability but also supports a wider culture of openness.
By working in the open, government services can not only build more efficient and user-centric digital services but also foster a culture of collaboration and learning that extends beyond individual teams or projects.
Making decisions based on needs
Choose the right level of security
Optimise load times and page performance
Account for connectivity issues
Make sure citizens’ rights are protected by integrating privacy as an essential part of your system.
Define the data you need to collect for your service.
Use the Privacy by Design framework to integrate privacy controls into your system.
Create a transparent privacy policy that outlines what data you collect, why you collect it, and how it's used and stored. You can use the privacy policy design pattern.
Ensure compliance with any applicable data security and privacy protection laws.
In many contexts where GovStack Building Blocks are used, internet bandwidth may be slow, so it is essential to optimise load times and minimise data transfer.
Use Google Lighthouse to test your web application's performance.
Identify areas for improvement based on the test results.
Implement improvements such as using compressed images, optimising front-end code, leveraging CDNs, etc.
Retest and continue to optimise as needed.
From a design perspective, only use necessary images, optimise images for the web, use CSS and SVG instead of images where possible, minimise the use of different font families, and optimise font loading if custom fonts are used.
Use AJAX/Fetch mechanisms for asynchronous and partial updates of the UI.
Choose proportionate security to control and monitor your technology programme. Security should protect your information technology and digital services while enabling users to access the data they need for their work. GovStack offers specific security requirements for this.
Evaluate the sensitivity of the data you're handling.
Based on the evaluation, choose appropriate encryption methods and robust user authentication systems. Use the OWASP Cheat Sheet Series as a guide:
Authentication Cheat Sheet: This provides guidance on implementing secure authentication systems, which is a fundamental aspect of security.
Session Management Cheat Sheet: This covers the best practices for handling user sessions securely.
Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet: CSRF is a common web application vulnerability; this cheat sheet provides guidance on how to prevent it.
Cross-Site Scripting (XSS) Prevention Cheat Sheet: XSS is another common vulnerability, and this cheat sheet provides guidance on how to prevent it.
Transport Layer Protection Cheat Sheet: This covers how to use SSL/TLS, which is vital for encrypting data in transit.
Input Validation Cheat Sheet: Input validation is an essential measure for preventing many types of attacks.
SQL Injection Prevention Cheat Sheet: SQL Injection is a common and dangerous vulnerability, and this cheat sheet provides guidance on how to prevent it.
HTML5 Security Cheat Sheet: If your service uses HTML5, this cheat sheet covers many of the new security considerations that come with it.
Implement the security measures in your system.
Test and adjust the security measures to ensure they provide the needed protection without overly impeding usability.
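As a small illustration of one of the measures above, the SQL Injection Prevention guidance largely comes down to using parameterised queries, so that user input is always treated as data and never as SQL. A minimal sketch using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # The ? placeholder binds the value safely; never build SQL by
    # concatenating user input into the query string.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user(conn, "alice"))        # [('alice', 'admin')]
print(find_user(conn, "' OR '1'='1"))  # [] -- the injection attempt matches nothing
```

The same principle applies whatever database driver you use: keep the query text fixed and pass values as bound parameters.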
Account for connectivity issues in different regions, considering the deployment options provided by the Building Blocks.
Assess Connectivity Conditions and User Needs: Understand the network conditions under which your users will be accessing your service.
Optimise Web Performance: Minimise the size of your resources and fix performance issues.
Implement Progressive Loading: Design your service so that it loads the most critical content first.
Use a Content Delivery Network (CDN): If your users are spread across a wide geographical area, using a CDN can speed up load times.
Utilise Service Workers for Offline Functionality: Service workers can intercept network requests and serve cached responses. Google's Workbox can help with this.
Choose the Right Caching Strategies: For instance, you might cache static resources for faster loading and implement a "network first, then cache" strategy for dynamic content.
Implement Local Storage: Consider storing some data locally on the user's device.
Test Under Low-Connectivity Conditions: Use browser developer tools or network throttling tools to simulate various network conditions.
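The "network first, then cache" strategy mentioned above can be sketched in a few lines. This is a transport-agnostic illustration; the `fetch` callable stands in for whatever request mechanism (service worker, HTTP client) your service actually uses:

```python
def network_first(url, fetch, cache):
    """Try the network first; on failure, fall back to a cached copy."""
    try:
        response = fetch(url)
        cache[url] = response          # refresh the cache on every success
        return response
    except OSError:
        if url in cache:
            return cache[url]          # serve stale content rather than nothing
        raise                          # no cached copy: surface the error

cache = {}

def good_fetch(url):
    return "<html>fresh</html>"

def flaky_fetch(url):
    raise OSError("network unreachable")

print(network_first("/home", good_fetch, cache))   # fetched and cached
print(network_first("/home", flaky_fetch, cache))  # served from cache
```

Static resources, by contrast, would typically use the opposite "cache first" order, since they rarely change between deployments.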
Test the service, verifying its compatibility with different devices, browsers, and assistive technologies.
Identify the range of devices, operating systems, and browsers your users may use.
Use testing tools like BrowserStack or LambdaTest to test your service on these platforms.
Make necessary adjustments to ensure compatibility across platforms.
Continuously test new versions of your service on these platforms.
Follow established standards and guidelines for multi-modal design, ensuring consistency and usability across different interaction modes.
Consider user expectations and preferences for different interaction modes, ensuring inclusivity and accessibility for all users.
Account for potential limitations and constraints of different platforms, systems, and devices, while maintaining interoperability and multi-modality.
Make sure your technology, infrastructure and systems are accessible and inclusive for all users.
Identify the various channels through which users will interact with your service (for example, web, mobile, SMS, call centre, physical location, etc.).
Consider the strengths and limitations of each channel. For instance, certain tasks might be more easily accomplished on a desktop than on a mobile device, or vice versa.
Design your service so that users can easily switch between channels as needed. This might involve making certain data or functionality available across multiple channels, or designing the service so that progress made on one channel can be saved and continued on another.
Consistency is crucial across all platforms. Keep a consistent design language (colours, fonts, layouts) and user experience (navigation, interaction patterns) across all channels.
Consider the use of responsive or adaptive design to ensure your service is usable on a variety of screen sizes and device types.
Get started using design patterns
Map the user journey: Break down the user journey into phases like registration, information collection, appointments, feedback, and messaging. Identify the steps users take in each flow. Consider the technology choices available to you.
Choose user flows: Identify the patterns (task-focused page types) needed for each part of your user journey.
Select page templates: User flow patterns often include several page templates. Start with existing patterns for user flows. For unique requirements, you may need to mix and match individual templates.
Identify Service Needs: use the to understand the key interactions in your service.
Users
• Architect
Steps
Apply
Find the service > Register or authenticate > Submit application (including answering questions and uploading documents) > Receive the outcome of a decision
If the outcome is successful:
Get notification for payment > Make payment > Give feedback
Patterns
Register
Authenticate
Asking users for feedback
Find a service
Check a user's eligibility
Make an application
A principle of good user design is reducing friction and making processes as seamless as possible for the user. One way to do this is by avoiding the need for user accounts where possible.
Creating and managing user accounts requires a significant amount of effort from both the user and the service provider. For the user, it's another set of credentials to remember. For the service provider, it's a matter of securely storing and managing that user data.
There are many situations where a service can be designed in such a way that a full account isn't necessary. Consider these alternatives:
One-Time GUID Links: If you need to provide users with a way to return to a specific state in a service (like a partially completed form), consider using one-time GUID (Globally Unique Identifier) links. These can be generated and provided to the user, allowing them to return to their session without needing an account.
Token-based Authentication: For some services, you can use token-based authentication. This could involve sending the user a unique token via email or SMS that they can use to access the service.
Third-Party Authentication: For services that require authentication, consider using third-party authentication services (like Google, Facebook, or Twitter login). This can make things easier for users, as they can use an existing account instead of creating a new one. However, be mindful of the potential privacy implications.
Guest Checkout: For e-commerce situations, consider offering a guest checkout option. This allows users to make a purchase without creating an account.
In all cases, the goal should be to make the user's interaction with the service as easy and smooth as possible, while still maintaining appropriate levels of security and privacy.
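A minimal sketch of the one-time GUID link alternative described above. The URL shape, storage, and expiry policy here are all illustrative assumptions, not part of any GovStack specification:

```python
import time
import uuid

# token -> (saved_state, expiry_timestamp); a real service would persist this
_links = {}

def issue_link(saved_state, ttl_seconds=86400):
    """Create a one-time link the user can follow to resume a session."""
    token = str(uuid.uuid4())
    _links[token] = (saved_state, time.time() + ttl_seconds)
    return f"https://service.example/resume/{token}"

def redeem(token):
    """Return the saved state once; the link cannot be reused."""
    state, expires = _links.pop(token, (None, 0))
    if state is None or time.time() > expires:
        return None
    return state

url = issue_link({"form_step": 3})
token = url.rsplit("/", 1)[1]
print(redeem(token))   # {'form_step': 3}
print(redeem(token))   # None -- the link only works once
```

Because the token is unguessable and single-use, the user can resume a partially completed form without ever creating credentials.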
Steps
Log in with credentials
Login page > A: Service continues / B: Not authorised page
Log in through provider
Login page > Provider journey > A: Service continues / B: Not authorised page
Reset credentials
Login page > Reset password > Send reset link > Reset password > Success
Patterns
Use this pattern to help users provide their personal and relevant information to create an account or profile in a system, platform or service.
Registration is useful when you need to give each user a unique identifier in a system or collect their information in one place. This allows users to access, apply for, or meet the requirements for a particular service.
Once registered, users can log in to their account using their chosen credentials to:
continue using the service,
apply for additional services,
see the services they have used in one place,
update their information, or
modify their account settings
Steps
From a service catalogue
Service catalogue pages > Perception survey
During a service
Service page > Feedback
At the end point of a service
End page > Satisfaction page
Patterns
Steps
Register to create an account in a system
Service page > Register > Get notified of status
Registered user sign-in to access service
Sign-in > access service
Registered user sign-in to apply for service
Sign-in > eligibility check (if needed) > apply for service > feedback
Patterns
Use case example
Steps
Find through taxonomy
Service catalogue/homepage > Topic page > Service sheet > Service
Search for service
Service catalogue/homepage > Results page > Service sheet > Service
Through search engine
Search engine > Service sheet > Service
Patterns
Service catalogue
Service sheet
Search results
Use this pattern to help users check whether they qualify for your service, saving them the time of registering for a service they would not qualify for, and redirecting them elsewhere where possible.
To use this pattern you need to have:
A service information sheet
A series of simple eligibility questions
An understanding of what existing data you can integrate with
Have a web page for your service where people can find the required information, such as the requirements to access the service and the steps they need to take to start using it. Check an example of a service information sheet.
Be sure to include general rules and information about whether the service can be used, such as an age limit.
Present the user with a series of simple questions that can determine their eligibility. Use questions when the eligibility criteria for the service are complex and require detailed information to determine whether a user qualifies.
Ask the user to provide information such as their age, location, employment status, income level, or other relevant details to your service. Follow the question page pattern.
The system should automatically process the information provided and determine whether the user is eligible to access the feature or service.
Present an outcome page to users to let them know the result of their eligibility check. The outcome page should provide a clear and concise summary of the user's eligibility status. If eligible, you should let the user know of the next steps to access the service. If the user is not eligible, let them know why and what they should do instead.
Here are some elements that can be included on the outcome page:
Eligibility outcome that clearly states whether the user is eligible or ineligible to access the service or feature.
Reason for eligibility determination such as a clear explanation if the user is found to be ineligible. This can include information such as incomplete or incorrect information provided in the form, not meeting certain age or income requirements, or other criteria.
Next steps depending on the outcome of the eligibility check. For example, if the user is found to be eligible, direct them to the next step, which may be to register for the service. If they are found to be ineligible, direct them to another service or further guidance.
Services where eligibility criteria can be complex and may vary depending on the specific service or feature being accessed. By using the "check eligibility" pattern, users can quickly and easily determine whether they qualify for a particular service, without having to go through a lengthy application process or wait for manual approval.
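The eligibility-check logic described above, returning an outcome, reasons, and next steps for the outcome page, might look like the following sketch. The rules (minimum age, maximum income) and field names are purely illustrative:

```python
def check_eligibility(answers, min_age=18, max_income=50000):
    """Apply simple eligibility rules and explain the outcome.

    Thresholds and answer keys are illustrative, not from any real service.
    """
    reasons = []
    if answers.get("age") is None or answers["age"] < min_age:
        reasons.append(f"You must be at least {min_age} years old.")
    if answers.get("income") is None or answers["income"] > max_income:
        reasons.append(f"Your income must not exceed {max_income}.")
    if reasons:
        return {"eligible": False, "reasons": reasons,
                "next_step": "See other services that may help you."}
    return {"eligible": True, "reasons": [],
            "next_step": "Register for the service."}

print(check_eligibility({"age": 25, "income": 30000}))
print(check_eligibility({"age": 16, "income": 30000}))
```

Because the outcome carries its reasons, the outcome page can show users exactly which criterion they did not meet, as recommended above.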
Steps
If you need users to check eligibility before applying
Put eligibility screening questions up front
Summary of requirements > Eligibility questions > Upload evidence (if needed) > Check your answers > Outcome
If not eligible
Allows users to fail fast
Patterns
Steps
Task list > Question flow > Evidence (if required) > Check your answers > Outcome
Patterns
Feedback
Perception survey
Satisfaction
Before you start
Service sheet
Asking users for consent
Task list
Asking users for information
Outcome
Invite users to provide feedback about their overall experience with the service catalogue. This could be a dedicated page, a pop-up prompt, or an email sent after a certain period of usage.
Create a form to collect user perceptions about the entire service catalogue. Use a mix of open-ended questions and structured items (like ratings or multiple-choice questions) to gather comprehensive feedback.
Include a clear call-to-action button for submitting the survey. A label like "Submit survey" is straightforward and effective.
After users submit their survey responses, show a success message to confirm receipt. Something simple like "Thank you, your survey has been submitted" works well.
In addition to feedback about specific services, consider collecting:
User ID: If the user is logged in, this can help provide context for the feedback.
Overall Satisfaction: Ask users to rate their overall satisfaction with the service catalogue.
Specific Services Used: Understanding which services a user has interacted with can provide context for their feedback.
Timestamp: Recording when the feedback was provided can help identify any time-related trends.
Suggestions for Improvement: Ask users what they think could improve their experience with the service catalogue.
To gain a deeper understanding of user experience with the government service catalogue, consider including these questions in your feedback form:
How did you find out about the government service catalogue? This can give insights into the effectiveness of your outreach and communication strategies.
Which services have you used? Knowing which specific services a user has interacted with can provide important context for their feedback.
How easy was it to find the services you needed? This can shed light on the effectiveness of your catalogue layout and search functionality.
How would you rate your overall experience with the services you used? This gives a high-level view of user satisfaction with your service offerings.
What improvements would you suggest for the services you used? This open-ended question allows users to provide specific feedback and suggestions for individual services.
What additional services or features would you like to see in the future? This can help guide your future development efforts based on user needs and wants.
This pattern is useful when you want to gather feedback on the overall service catalogue, rather than individual services or interactions. This can help identify strengths and weaknesses across your offerings and improve the user experience at the catalogue level.
Do not use this pattern as a substitute for gathering feedback on individual services or interactions. Users' experiences with specific services can be different from their overall impression of the service catalogue, so both types of feedback are valuable.
At the end point of the user journey, prompt users to rate their experience. This should be a simple rating scale (1-5) or a binary satisfied/dissatisfied question.
Following the rating, provide a text field for users to share more details about their experience. Prompt users with an open-ended question like, "How could we improve your experience?"
Include a clear 'submit' button to finalise their feedback.
Display a success message after submission, thanking them for their feedback.
User Satisfaction Score: The user's response to the satisfaction rating question.
Feedback Text: The user's response to the open-ended feedback question.
Page URL: The URL of the page from which the feedback was submitted.
Session ID: Identifies the particular user session, for associating feedback with specific user journeys.
Consider adding questions to your satisfaction survey to gain deeper insights:
Did you accomplish what you intended to do in this session? This helps you understand whether the user journey was effective and efficient.
Use this pattern at the end of a user journey to collect valuable feedback about user experiences. Be aware that satisfaction feedback is biased towards users who reach an end point of the service.
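The data points listed above could be captured in a simple record like the following sketch. The field names and types are illustrative, not a prescribed schema:

```python
import time
from dataclasses import dataclass, field

@dataclass
class SatisfactionFeedback:
    score: int          # 1 (very dissatisfied) .. 5 (very satisfied)
    feedback_text: str  # answer to the open-ended question
    page_url: str       # where the feedback was submitted
    session_id: str     # associates feedback with a user journey
    timestamp: float = field(default_factory=time.time)

    def __post_init__(self):
        # Reject scores outside the rating scale at the point of capture.
        if not 1 <= self.score <= 5:
            raise ValueError("score must be between 1 and 5")

entry = SatisfactionFeedback(4, "Quick and easy to use", "/apply/done", "abc123")
print(entry.score)  # 4
```

Validating the rating at capture time keeps the satisfaction data clean enough to aggregate later.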
This page focuses the user's attention on what they need to complete the service in one go. You might list the documents or evidence required or emphasise certain eligibility criteria (e.g., age).
When there is a risk that there is more information on a start page than users are likely to read, you can break the 'before you start' information onto its own page.
Throughout your service journey, prompt users to provide feedback.
This could be a dedicated page that opens in a new tab or a modal within the page. Provide a simple text area where users can input their thoughts and experiences. The prompt should be open-ended, such as "Tell us about your experience" or "How can we improve this page?"
Include a clear call to action button for submitting feedback. "Submit feedback" is an effective, straightforward choice.
After users submit their feedback, display a success message to let them know it was received successfully. This can be a simple statement like "Thank you, your feedback has been submitted."
Page URL: This shows the specific page where the user submitted feedback, giving you context about what their comments may be referring to.
Referrer URL: This indicates the page the user visited before the current one, which could be useful for understanding the user's journey.
Device Information: This includes data like the user's operating system, browser type, and screen resolution, which can be helpful for troubleshooting technical issues.
Timestamp: Recording the time and date of the feedback can help identify issues that occur at specific times.
Session ID: If your system uses session IDs, collecting this can help you associate the feedback with a particular user session.
You should aim to collect feedback whenever possible, as it can be helpful in identifying issues or areas for improvement from the user's perspective.
At the end points of the user journey, you should use the satisfaction pattern.
Use this pattern to help users check if they are ready to start a service.
This helps people understand what your service does and whether they need to use it.
A list of things most users need to know: for example, what your service is, what will happen, what users will get or how much it'll cost. To keep the content concise, do not include details about anything that would be obvious to users.
There should be a clear call to action button to start the service, usually “Start now”. You should also include a link to “sign in” or “continue journey” if the user is able to continue an existing journey.
You should provide ways for people who can’t access the service online to get support, for example, by phone or text relay. You may also include details for support channels.
At the start of a service which involves the user inputting information in order to get something. For example, at the start of an application form.
Use these patterns to give users control over how you manage their data. These patterns align with the GovStack Consent Building Block.
Related Building Block: GovStack Consent Building Block.
Consent covers all activities done for the same reason. If you use the data for more than one reason, get separate consent for each. For example, accessing data to check eligibility is separate from accessing data to make an application.
Use these patterns when you need to:
Access a user's data.
Store, manage or share a user's data.
Allow users to manage the data you hold on them.
You do not need to ask for consent when:
the data is required for a government to perform a specific task or function set out in law (for example, terms and conditions), so you do not need to ask for consent.
a person is simply informed of the processing of the data by the organisation as part of the service provided under contract or by an authority.
the entity does not identify, or cannot identify, people with reasonable effort.
This pattern allows users to accept or reject a request to access or share data for the use of a service. For example, asking to access data from social insurance records in order to check eligibility for another service.
User journey considerations:
Consent should be grouped by purpose, meaning you may need multiple consent pages.
You may need to ask for consent at the start of a service or throughout the user journey.
You should test your service with users to find the journey that works for them.
Explain what data you are requesting and the benefit to the user for sharing that data. For example, “Provide your location data so that we can tailor offers relevant to you”.
Be clear about:
Why you need it and the benefit to the user.
How the data will be used and for how long.
If you are handling a simple data set, present the data you are collecting followed by a checkbox to explicitly confirm consent.
If you are handling multiple data sets allow the user to choose which data is shared.
If you do not need to ask for consent but you are handling user data this should be specified in the privacy statement.
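Grouping consent by purpose, with a per-item choice when multiple data sets are involved, might be modelled like the following sketch. The purpose and data item names are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRequest:
    purpose: str                 # one consent request per purpose
    data_items: list             # the data sets being requested
    granted: dict = field(default_factory=dict)

    def record_choice(self, item, allowed):
        """Record the user's explicit choice for one data item."""
        if item not in self.data_items:
            raise ValueError(f"unknown data item: {item}")
        self.granted[item] = allowed

    def shared_items(self):
        """Only the items the user explicitly agreed to share."""
        return [i for i, ok in self.granted.items() if ok]

req = ConsentRequest("Check eligibility", ["date_of_birth", "income", "location"])
req.record_choice("date_of_birth", True)
req.record_choice("location", False)
print(req.shared_items())  # ['date_of_birth']
```

A second purpose, such as making an application, would be a separate `ConsentRequest`, mirroring the rule that consent covers all activities done for the same reason.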
[Add guidance on when that should be presented to the user]
If the service will be accessing or sharing user data on an ongoing basis then you need to give users a method of managing consent. See point 6 of the future considerations of the Consent Building Block.
[Give details for how to manage data if the user changes their mind.]
In cases where the person giving consent is doing so on behalf of someone else, such as a guardian or carer.
[This pattern is in the backlog]
When asking users information during a questions flow consider using progressive disclosure drop-downs or inline content to explain why you are asking for that information and how you will handle the data.
You may need to offer users an opportunity to review and correct data using the .
Use this pattern to gather information from users using your service.
Clearly state why you need the information and how it will be used. For instance, you might need users' information so that you can provide a service, register them for a service, or tailor an experience to meet their needs.
Consider whether you need to ask for the information or whether you can use integrations to get that data from internal or external sources. Check whether you need to ask for consent to ask the questions.
You can use a question protocol to help you figure out what you need to ask. If you ask people for optional information, add ‘(optional)’ to the question. Do not mark mandatory questions with asterisks as these are not accessible.
Backlink
A question page should have a backlink to help users who may want to go back to the previous question to make any changes.
Question or question heading
When asking people for information, ask for one thing at a time. This helps users focus on one specific thing and provide their answer without being overwhelmed by too many demands at once.
This can be one question per page or group-related questions together, for example, contact details. Grouping related questions together can help users understand the context of each question and make it easier to provide accurate responses. When you group related questions together, you will have a ‘question heading’ that will help people understand what is needed for the set of questions.
Hint text
Provide clear instructions to help users understand what is expected of them on each question.
Question field
Use the appropriate question field for the different question types.
Other ways to provide information
Always provide alternative mechanisms for users to identify themselves or provide their information so that they can access your service. Provide clear instructions on what to do if they encounter any problems.
In long complex forms and transactions that involve multiple steps and pages, help users understand the list of tasks involved and their progress as they move from one question to the next.
Use clear labels for each step and provide a visual indicator of their progress. Show the order in which the steps should be completed and mark completed tasks.
Group similar tasks together and use clear headings to explain what is involved or is needed to complete the task.
As users complete each step or task, show a label that describes their progress.
Use visual and written labels to indicate their progress. Avoid relying solely on visual indicators like progress bars or percentages, as they may pose accessibility challenges. Include a text display of progress as well.
Maintain a sense of hierarchy: If there is a specific order in which the steps should be completed, make it evident to the users. You can indicate the hierarchy by organising steps in a logical sequence or visually nesting them within each other.
If the form or transaction is long, provide a save feature that allows users to pause and continue later. When users resume the transaction, display the task list page as the first thing they see.
In a complex long form that involves multiple tasks, steps or pages.
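The written progress label recommended above, shown alongside any visual indicator so that progress is accessible, can be generated from the task list itself. A minimal sketch with illustrative task names:

```python
def progress_label(tasks):
    """Return a textual progress indicator to accompany visual ones."""
    done = sum(1 for t in tasks if t["completed"])
    return f"You have completed {done} of {len(tasks)} tasks."

tasks = [
    {"name": "Personal details", "completed": True},
    {"name": "Contact details", "completed": True},
    {"name": "Upload evidence", "completed": False},
]
print(progress_label(tasks))  # You have completed 2 of 3 tasks.
```

The same task list structure can drive the save-and-resume feature: persist it when the user pauses and render it again as the first page on return.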
Use this pattern to help users check, review and confirm their entered information before taking a significant final action, such as submitting.
By allowing users to review, make changes, or confirm their answers, this pattern helps prevent errors in data submission.
The long term benefits of this pattern are:
users will be confident while using your service as they can visually confirm that all their information has been accurately captured.
reducing incorrect and incomplete information will result in lower error rates in applications.
Use this pattern when you need to capture information from users on a form that spans multiple pages or steps.
Use a clear heading that communicates the purpose of the page, such as "Check your answers before sending your application".
Show the summary of questions and answers given. Ensure the information is organised and easy to scan.
Consider the type of answers expected from users. For longer answers, utilise a full-width layout to accommodate the content. For shorter answers, a two-thirds layout may be appropriate to optimise space.
Break content into sections if needed. If there is a large number of questions or if it improves the clarity, divide the content into relevant sections.
Clearly indicate when a question has not been answered because it was optional. Make it evident to the user that the answer was not provided.
Provide navigation to edit answers. Offer users a straightforward way to navigate back to previous steps and edit their answers if needed. This can be achieved through direct links or buttons that allow users to easily access and modify specific questions.
Include a call to action button at the bottom of the page that helps the user take the final action such as submitting their application.
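The summary rows for a "check your answers" page, including the explicit marking of unanswered optional questions and a change link back to each question, can be sketched as follows. The question structure and URLs are illustrative assumptions:

```python
def summary_rows(questions):
    """Build rows for a 'check your answers' summary page.

    Unanswered optional questions are shown explicitly as not provided,
    and every row links back to its question so users can edit answers.
    """
    rows = []
    for q in questions:
        answer = q.get("answer")
        if answer is None and q.get("optional"):
            answer = "Not provided"
        rows.append({
            "question": q["label"],
            "answer": answer,
            "change_link": f"/form/{q['id']}",  # illustrative URL scheme
        })
    return rows

questions = [
    {"id": "name", "label": "Full name", "answer": "Ada Lovelace"},
    {"id": "phone", "label": "Phone number", "optional": True},
]
print(summary_rows(questions))
```

Rendering from the same question definitions used by the form keeps the summary and the questions from drifting apart.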
Use this page to provide users with confirmation that they have successfully completed their intended task. It helps users know that their actions have been successfully processed.
Outcome page
How it Works
On this page include:
Details on what will happen next.
Contact details to provide users with further support.
The guidance and patterns in this document draw inspiration from the valuable contributions of other organisations which are referenced below.
The good practice guidelines have been influenced by service quality standards like:
The service patterns have been highly influenced by public sector design systems such as:
A historical log of key decisions regarding this Building Block.
A list of user flows and patterns to add to these UX/UI specifications in the future.
A list of topics that may be relevant to future versions of this Building Block.
A list of functions out of the scope of this Building Block.
Building blocks (BBs) are software modules that can be deployed and combined in a standardized manner. Each building block is capable of working independently, but they can be combined to do much more.
Building blocks are composable, interoperable software modules that can be used across a variety of use cases. They are standards-based, preferably open-source, and designed for scale.
Each building block exposes a set of services in the form of REST APIs that can be consumed by other building blocks or applications.
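As an illustration only (the host, path, and payload below are assumptions, not a published GovStack API), a consuming application or building block might construct such a REST call like this:

```python
import json
import urllib.request

def build_bb_request(host, resource, payload):
    """Construct (but not send) a JSON POST to a building block service.

    Host name, path scheme, and payload shape are hypothetical.
    """
    url = f"https://{host}/api/v1/{resource}"
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_bb_request("registration.example.gov", "applications",
                       {"applicant_id": "12345"})
print(req.full_url)  # https://registration.example.gov/api/v1/applications
```

In a real deployment, the exact resource paths and schemas come from each Building Block's published API specification.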
Please browse the building blocks that are specified in this release in the menu to the left.
Publication date: December 6th, 2023
The GovStack initiative is proud to announce the release of the 23Q4 publication! Since the 1.0 publication in May 2023, the community has worked hard to add a substantial amount of new content, including 3 Building Blocks, 10 Reference Use Cases, UI/UX Guidelines, and a pre-release of the GovStack Sandbox documentation. These new additions allow us to continue progressing towards our goal of providing a common set of reference use cases, technical building blocks, and associated tools to accelerate and improve digital system design and implementation.
This publication was made possible by dozens of people, many of whom are volunteers, who were willing to share their expertise and time towards helping to build GovStack. The proficiency and diversity of this group of contributors strengthens the value of this work and we are extremely thankful for their involvement.
The E-Marketplace Building Block enables the trade of products and/or services via electronic media where at least one of the transacting parties taking part in the transaction is the government (either as the consumer or the provider) and the other party or parties can be either pre-qualified providers or consumers of such goods and services (respectively).
The E-Signature Building Block provides the necessary functionalities to bring handwritten signatures to the digital world, improving the user experience of managing the signing process and reducing the need to print out forms, sign on paper, scan, and upload documents.
Geographic Information System Building Block
The Geographic Information Services (GIS) Building Block enables various applications with location-based capabilities. By integrating a wide range of spatial data, such as maps, imagery, and location-based services, users can access and process geospatial data from different sources and link geographic locations to various "objects" within an open information technology environment.
The UI/UX guidelines provide guidance to kick-start the design and development of services that use and combine GovStack applications and Building Blocks, as well as other components while maintaining a seamless and consistent user experience.
Market Linkage: Services which link buyers to sellers in the agriculture sector
Rural Advisory Service for Farmers: Equipping farmers with the information, knowledge, and skills needed to improve their farms and get better yields
Disaster Management: Implementation process of a disaster management system
Remote Learning: Delivering engaging and effective educational content to students, facilitating seamless communication between educators and learners, enabling interactive assessments, and supporting continuous learning in a virtual environment
Business Taxation: Digital transformation of tax administration (e-taxation) in the business sector
Inclusive Financial Services for SMEs: Supporting access to financing for small-medium enterprises (SMEs) through streamlined loan programs targeting SMEs
Telemedicine: Extending the reach of traditional health systems to help meet ambitious national health targets
Smart Vaccination: Empowering immunization efforts by enhancing distribution efficiency, optimizing resource utilization, promoting equitable access to vaccines, and ensuring the integrity of the vaccine supply chain
Pandemic Response: Digitalizing pandemic response operations and management systems to ensure effective management during a pandemic
Anticipatory Cash Transfers: Leveraging digital tools to streamline Anticipatory Cash Transfers in climate disaster scenarios
General
Added information on designing adaptors and integrating with the GovStack testing harness (non-functional requirements section 6)
Minor updates to BB design principles (section 4)
Added new section on UX switching and handover to non-functional requirements (section 8)
Updated the GovStack architecture (section 3) of the non-functional requirements; added an overall GovStack architecture diagram and description
Updates to the security specification: removed the section on security Building Block modules and replaced it with new ‘Authorization Services’ and ‘Additional Security Modules’ sections (sections 7 & 8)
Added new standards section (section 6)
Added UI/UX guidance (originally a Wave 3 Building Block, now integrated into the cross-functional architecture documents)
Highlights
Improvements to API testing
Changes to Service APIs to improve clarity and consistency
Updates to Terminology to improve clarity
Highlights
Improvements to API testing
Updates to naming of APIs to be consistent with GovStack standards
Updates to Cross-Cutting Requirements for consistency with Architectural guidelines
Highlights
Modified the example steps in multi-step enrollment requirements
Added requirement for porting identity data from an existing database during enrollment
Added data structures for the enrollment API
Added enrollment API definition from the enrollment.yaml file
Added enrollment workflows for fresh enrollment and for enrollment reusing an existing database
Added workflow for presence-only verification
Highlights
Updates to naming of APIs to be consistent with GovStack standards
Fixed errors in the Subscribe workflow
Added event acknowledgment to the event delivery workflow
Highlights
Improvements to API testing
Updates to naming of APIs to be consistent with GovStack standards
Updates in all headings for better readability and structure
Updates to diagrams and tables, including a new workflow diagram
Highlights
Updates to naming of APIs to be consistent with GovStack standards
Updates to the APIs for the Account Mapper, Bulk Disbursements, and Vouchers
Additional APIs to support the P2G feature
Additional terms added to the Terminology section
Enhancements to the P2G Functional Requirement
Addition of the “Billers Table” in the Data Structures section
Updates to the P2G workflows in the Internal Workflows section
Highlights
Improvements to API testing
Updates to naming of APIs to be consistent with GovStack standards
Updates to Cross-Cutting Requirements for consistency with Architectural guidelines
Updates to the Key Digital Functionality and Functional Requirements sections to improve clarity
Highlights
Updates to the API to support deletion of multiple appointments
Improvements to the Functional Requirements section related to booking multiple events/appointments
Highlights
Improvements to API testing
Updates to naming of APIs to be consistent with GovStack standards