Archer Certified Administrator-Expert Questions and Answers
What should you do if the LDAP status remains QUEUED for an extended period?
Options:
Reconfigure the LDAP settings
Restart the server
Check the network connection
Ensure the Archer LDAP synchronization service is running
Answer:
D
Explanation:
In the Archer environment, LDAP synchronization is an asynchronous process managed by the Archer Support Tools and specific Windows services. When an administrator triggers a manual LDAP sync or a scheduled sync begins, the job is placed into a processing queue within the Archer database. If the status remains "QUEUED" and never progresses to "IN PROGRESS" or "COMPLETED," the primary culprit is almost always the underlying service responsible for picking up that task.
As detailed in the Archer Administration II technical troubleshooting modules, the Archer LDAP Synchronization Service (or in newer versions, the Archer Queuing/Job Engine service) must be in a "Started" state on the web or application server to process these requests. If the service is stopped, hung, or crashed, the job will sit in the queue indefinitely. While network connections (Option C) and configuration settings (Option A) are important, they typically result in a "FAILED" status with an error log rather than a "QUEUED" status. Restarting the entire server (Option B) is an inefficient "last resort" that doesn't address the specific root cause as directly as verifying the service status in the Services console.
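For administrators who script health checks, the service state can be read without opening the Services console. The sketch below is illustrative only: it relies on the Windows `sc query` command, and the service name you pass in is an assumption (actual Archer service names vary by version and installation).

```python
import subprocess

def service_state(service_name: str) -> str:
    """Query a Windows service by name and return its state keyword."""
    output = subprocess.run(
        ["sc", "query", service_name],
        capture_output=True, text=True,
    ).stdout
    return parse_state(output)

def parse_state(sc_output: str) -> str:
    """Extract the state keyword (e.g. RUNNING, STOPPED) from `sc query` output."""
    for line in sc_output.splitlines():
        if "STATE" in line:
            # Typical line: "        STATE              : 4  RUNNING"
            return line.split()[-1]
    return "UNKNOWN"
```

A state of anything other than `RUNNING` for the job-processing service is consistent with the "stuck in QUEUED" symptom described above.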
How does the pre-built content of the Question Library vary from one organization to another?
Options:
The pre-built Question Library content is the same for any number of use cases licensed.
The pre-built Question Library content varies based on the support model selected.
The pre-built Question Library content depends on the deployment option used.
The pre-built Question Library content depends on the use cases licensed.
Answer:
D
Explanation:
The Archer Question Library is a centralized repository of questions used to build questionnaires. According to the Questionnaires and Assessment Maintenance curriculum, the "Out of the Box" (OOTB) content provided by Archer is modular. When an organization licenses a specific Use Case (e.g., IT Security Risk Management, Third Party Governance, or Business Continuity), the relevant question sets for those domains are imported into the library.
For example, an organization that only licenses Third Party Governance will have a Question Library populated with vendor-related assessments (like SIG or ISO-based questions), but they will not see Business Continuity-specific questions unless that use case is also licensed. This ensures that the platform remains uncluttered and relevant to the specific business needs of the organization. The deployment option (On-Premise vs. SaaS) and the support model have no impact on the actual metadata content of the library; it is strictly driven by the licensed functional modules.
When building a Calculated field, the administrator is struggling to remember if the COUNT or COUNTA function is the right one to use for a given purpose. What is the best approach for the administrator to take to solve this dilemma?
Options:
Create multiple fields, one using COUNT and one using COUNTA, and from there determine the correct one to use.
Use the Help section within the Formula Builder to learn about each of these functions and see some examples of syntax for each.
Consult with Archer Professional Services whenever Calculated fields are required.
Open a ticket with Archer Support for guidance.
Answer:
B
Explanation:
The Archer Formula Builder is equipped with a built-in documentation tool specifically designed to assist administrators with syntax and logic. As taught in the Archer Administration II course, clicking the "Help" or "Info" icon within the Formula Builder interface provides a searchable library of all available functions.
For the COUNT vs. COUNTA dilemma, the Help section clarifies that COUNT is typically used for numeric values, whereas COUNTA (Count All) counts any cell that is not empty, including text and dates. This built-in resource provides the exact syntax and common use-case examples, making it the most efficient and standard "best practice" for self-service troubleshooting. Options C and D are inefficient for such a common task, and Option A is a "trial and error" approach that can lead to database clutter and performance issues if calculations are built incorrectly during the testing phase.
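The behavioral difference is easy to mimic outside Archer. The Python sketch below illustrates the general COUNT/COUNTA distinction described above; it is not a reproduction of Archer's calculation engine, whose edge-case handling may differ.

```python
def count_numeric(values):
    """COUNT-style: count only values that are numeric."""
    return sum(1 for v in values
               if isinstance(v, (int, float)) and not isinstance(v, bool))

def count_all(values):
    """COUNTA-style: count any value that is not empty."""
    return sum(1 for v in values if v not in (None, ""))
```

Given a field population like `[3, "High", None, 7.5, "", "2024-01-01"]`, the COUNT-style function returns 2 (the numbers only), while the COUNTA-style function returns 4 (every non-empty value, including text and dates).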
The HR department needs to update their business hierarchy. The most updated instance of the business hierarchy is stored in Archer. They have no direct access to Archer. As an administrator, you are asked to extract this data and load it into an external database. Which feature allows you to do so?
Options:
Data Publication
Data Gateway
Data Feed
Data Import
Answer:
A
Explanation:
While the Data Feed Manager (Option C) is primarily used to bring data into Archer, the Data Publication Service (DPS) is the specific tool designed to push data out of Archer into an external SQL Server database. As taught in the Data Integration module of Administration II, Data Publication allows an administrator to define a "Publication Task" that maps Archer applications and levels to external database tables.
This is the ideal solution for the HR department's request because HR can then point their own reporting tools or databases to that external "target" database without ever needing an Archer login. Data Gateway (Option B) is a legacy term or refers to specific API connectors, and Data Import (Option D) is strictly for ingestion. Data Publication ensures that the most recent "Golden Record" of the business hierarchy in Archer is synchronized to the external environment on a scheduled basis, maintaining data consistency across the enterprise.
What will happen if the source file for a Data Import contains a row missing required field content for the target application?
Options:
The Data Import fails unless the row with the missing required field content is the last row in the file.
An administrator must process the file using administrator override.
The Data Import fails without creating or updating any content records.
The Data Import logs a warning message asking the user to check the source file and processes successfully for the rest of the complete rows.
Answer:
C
Explanation:
The Data Import tool is designed for strict adherence to application business rules. According to the Archer Administration II curriculum, required fields are mandatory for the creation of any record. If a source file contains even a single row that is missing data for a "Required" field, the validation engine will trigger an error.
Unlike the Data Feed Manager, which can be configured to "Skip" errors and continue, a standard Data Import is an "all or nothing" operation by default when it encounters critical structural failures like missing required data. The import process will stop, and no records from that file will be committed to the database. This ensures that the application does not end up with "broken" records that bypass the logic intended by the administrator. To fix this, the administrator must either populate the missing data in the source file or temporarily make the field "Not Required" in the application builder before re-attempting the import.
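Because a single bad row blocks the whole file, it is worth pre-validating the source file before uploading. A minimal sketch, assuming a CSV source; the required-field names are illustrative, not taken from any actual application:

```python
import csv
import io

def find_incomplete_rows(csv_text, required):
    """Return 1-based data-row numbers that are missing any required field value."""
    reader = csv.DictReader(io.StringIO(csv_text))
    bad = []
    for i, row in enumerate(reader, start=1):
        if any(not (row.get(f) or "").strip() for f in required):
            bad.append(i)
    return bad
```

Running this against the export before importing lets you fix the offending rows in the source file instead of discovering the failure after the "all or nothing" import aborts.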
Details such as installation history, applications, solutions, jobs, and Top 10 field histories are viewed in:
Options:
Installation Report
Access Control Report
Instance Report
Application builder Report
Answer:
C
Explanation:
The Instance Report is a comprehensive diagnostic document that provides a "snapshot" of an entire Archer instance. According to the Archer Administration II curriculum, this report is found within the Archer Control Panel (ACP) or can be generated from the Administration workspace.
It is specifically designed to aid in troubleshooting and system auditing. It contains metadata about the installation history, a list of all applications and solutions, and the status of background jobs. Most importantly, it includes performance-related data such as the Top 10 field histories (identifying which fields are growing the fastest in the database) and record counts. This report is often requested by Archer Support when investigating system-wide performance issues, as it aggregates critical architectural data into a single, readable view that is more holistic than an Access Control or Application Builder report.
Which of the following can Data Imports populate data in?
Options:
Questionnaires and Sub-Forms
Applications, Sub-Forms and Questionnaires
Applications and Sub-Forms
Applications and Questionnaires
Answer:
C
Explanation:
The manual Data Import tool has specific structural limitations compared to the more robust Data Feed Manager. As taught in Archer Administration II, Data Imports are designed to handle standard Applications and their associated Sub-Forms.
When importing into an application that contains a sub-form, the administrator can map source columns to the sub-form fields to create multiple sub-form line items for a single parent record. However, Questionnaires (Option A, B, and D) cannot be populated via the standard Data Import tool. Questionnaires utilize a unique "Target/Assessment" architecture that requires either manual launching via a Campaign or the use of the Data Feed Manager with a specialized "Questionnaire" transporter. Because the Data Import tool lacks the ability to handle the complex relationship between the Question Library, the Questionnaire Template, and the Target record, it is restricted to standard application and sub-form data ingestion.
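The parent/sub-form mapping described above can be pictured as grouping flat source rows by a parent key, with the remaining columns becoming sub-form line items under each parent record. The field names below are purely illustrative:

```python
from collections import defaultdict

def group_subform_rows(rows, parent_key, subform_fields):
    """Group flat import rows into one parent record per key,
    each carrying a list of sub-form line items."""
    parents = defaultdict(list)
    for row in rows:
        item = {f: row[f] for f in subform_fields}
        parents[row[parent_key]].append(item)
    return dict(parents)
```

Three source rows sharing the same "Vendor" value would thus collapse into one parent record with multiple contact line items, which mirrors how the import maps sub-form columns.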
If a valid global search returns no results, where is that logged?
Options:
Configuration service log file
Queuing service log file
Job framework log file
Data feed service log file
Answer:
C
Explanation:
In the Archer architecture, search operations—particularly Global Search and Indexing—are handled by the Search service, which operates under the broader Job Framework. According to the Archer Installation and Troubleshooting guide, when a user executes a search that is technically "valid" (meaning the syntax is correct) but fails to return expected results or fails during execution, the details are captured in the Job Framework log files.
These logs provide insight into how the Search service is interacting with the Lucene indexes. If the index is corrupt or if the search query is timing out, the Job Framework logs (usually found in the \Logs directory on the Services server) will contain the specific stack traces or warnings. The Configuration Service (Option A) only logs system-level startup and ACP connectivity, and the Data Feed Service (Option D) is irrelevant to UI search queries. Reviewing the Job Framework logs is the standard first step for administrators when users report that "search isn't working" despite records clearly existing in the system.
Which of the following allows you to export record data to a preconfigured Microsoft Word document?
Options:
Mail Merge Templates
Scheduled Report Distributions
On Demand Notification Templates
Subscription Notifications
Answer:
A
Explanation:
Mail Merge Templates are the specific feature in Archer designed to bridge the gap between Archer record data and Microsoft Office documents. According to the Archer Administration II curriculum, these templates allow administrators to upload a Word .docx file containing "mail merge tags" that correspond to Archer field aliases.
When a user triggers the mail merge (via the Export button on a record), Archer dynamically replaces those tags with the actual data from the record and generates a finished document. This is the standard method for producing formal reports, certificates, or letters that require specific corporate branding and formatting that cannot be achieved with a standard CSV or PDF export. On Demand Notification Templates (Option C) are for emails, and Scheduled Report Distributions (Option B) are for sending saved Archer reports on a timer, not for creating formatted Word documents based on a single record's context.
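Conceptually, the merge is a tag-for-value substitution. The sketch below uses a made-up `[[Alias]]` tag syntax purely for illustration; real Archer templates use Word merge fields keyed to field aliases, and the substitution is performed server-side by Archer, not by custom code.

```python
def merge_template(template: str, record: dict) -> str:
    """Replace [[Alias]]-style tags with record values (illustrative tag syntax)."""
    out = template
    for alias, value in record.items():
        out = out.replace(f"[[{alias}]]", str(value))
    return out
```

Every tag that matches a field alias is swapped for the live record value, which is exactly the behavior that produces a finished, branded document from one template.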
The Open Data Protocol (OData) allows ...
Options:
The RESTful API to perform Advanced Search and subsequently run reports.
The SOAP API to narrow down the responses sent back from the server.
The RESTful API to narrow down the responses sent back from the server.
The SOAP API to perform Advanced Search and subsequently run reports.
Answer:
C
Explanation:
The RESTful API in Archer utilizes the OData (Open Data Protocol) standard to provide powerful querying capabilities. According to the Archer Administration II integration documentation, OData allows developers to use specific URL parameters—such as $filter, $select, $top, and $orderby—to refine the data returned by the API.
Without OData, a REST call might return every field for every record in an application, leading to significant overhead and slow response times. By using OData, the client can "narrow down the responses" by requesting only specific fields (using $select) or only records that meet certain criteria (using $filter). This is fundamentally different from the SOAP API (Options B and D), which relies on structured XML search requests. OData is what makes the RESTful API efficient for mobile applications and external integrations that require specific, lightweight data payloads.
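As a sketch, the query-string construction looks like this. The base URL and endpoint path are placeholders, not documented Archer endpoints; consult the API guide for your version for the real paths.

```python
from urllib.parse import urlencode

def odata_query(base_url, select=None, filter_=None, top=None, orderby=None):
    """Build an OData-style URL using $select/$filter/$top/$orderby options."""
    params = {}
    if select:
        params["$select"] = ",".join(select)   # only these fields come back
    if filter_:
        params["$filter"] = filter_            # only matching records come back
    if top:
        params["$top"] = str(top)              # cap the page size
    if orderby:
        params["$orderby"] = orderby
    return f"{base_url}?{urlencode(params)}"
```

A call such as `odata_query("https://archer.example.com/api/core/content", select=["Title", "Status"], filter_="Status eq 'Open'", top=10)` returns a URL whose parameters narrow the response exactly as described above, instead of pulling every field of every record.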
Which API(s) enable the automated creation of packages?
Options:
All of the Archer APIs do this
SOAP API
Content API
RESTful API
Answer:
D
Explanation:
The Archer Packaging API is a component of the modern RESTful API suite. As organizations move toward DevOps and automated deployment models, the ability to programmatically create and export Archer packages (the containers used to move applications between Dev, Test, and Production) has become essential.
According to the Archer Administration II curriculum, the RESTful API provides endpoints specifically for the Packaging Service. This allows administrators to write scripts that automatically bundle application changes into a .zip package without manually navigating the "Packaging" menu in the Archer UI. The SOAP API (Option B) is primarily focused on record data and search services and does not have comprehensive packaging capabilities. The Content API (Option C) is strictly for record-level content (CRUD operations) and does not interact with the system's metadata packaging engine. Therefore, the RESTful API is the correct tool for automating the "Package Creation" workflow.
Which of the following is true about the report content in a Scheduled Report Distribution?
Options:
Content in the attached report will be updated in the recipient's inbox with real-time data.
Content in the attached report varies based on access rights granted to the recipient.
Content in the attached report depends on the access rights of the user who created the report.
Content in the attached report depends on the access rights of the administrator who built the Scheduled Report Distribution.
Answer:
D
Explanation:
Scheduled Report Distributions behave differently than standard "On-Demand" reports. According to the Archer Administration II curriculum, when a report is scheduled for distribution (e.g., emailed as a PDF or Excel file), the Archer platform must "impersonate" a user to run the query and generate the file.
The platform uses the security context of the user who configured the Scheduled Report Distribution (typically an administrator). This means that all recipients of the email will see the exact same data—the data that the administrator is authorized to see. This is a critical security consideration: if an administrator has "System Administrator" rights and schedules a report for a group of low-privilege users, those users may see sensitive data in the attachment that they would not be able to see if they logged into Archer directly. Option B is incorrect because the system does not run a separate query for every individual recipient's unique permissions at the time of the email burst.
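The impersonation behavior can be summarized as "render once, send many." The sketch below models that logic generically; it is not Archer code, and the function names are invented for illustration.

```python
def distribute_report(run_report, scheduler_ctx, recipients):
    """Render the report ONCE under the scheduler's security context,
    then send the identical attachment to every recipient."""
    attachment = run_report(scheduler_ctx)  # single query, one security context
    return {recipient: attachment for recipient in recipients}
```

Because the query runs a single time under the configuring user's context, every recipient receives byte-identical content, which is precisely why an over-privileged scheduler can leak data to low-privilege recipients.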
Where are LDAP-related errors logged?
Options:
Job framework log file
Data Feed Service log file
Queuing service log file
Configuration service log file
Answer:
A
Explanation:
LDAP Synchronization is an asynchronous task managed by the Archer Job Engine. According to the Archer Installation and Troubleshooting guide, all tasks that are processed by the background Job Engine—including Recalculations, Notifications, and LDAP Syncs—capture their detailed execution data and error stack traces in the Job Framework log files.
These logs are typically found on the Services server in the \Logs directory (e.g., Archer.JobFramework.log). When an LDAP sync fails (perhaps due to a service account lockout or a network timeout reaching the Domain Controller), the error will not appear in the Configuration Service (Option D), which only handles ACP settings, nor the Queuing Service (Option C), which only manages the "hand-off" of tasks. The Job Framework log is the granular technical record that administrators must consult to identify the specific LDAP error codes (like "52e" for invalid credentials) returned by the directory server.
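When scanning these logs, the Active Directory sub-code usually appears after the token `data` in the bind error message. A minimal extraction sketch; the reason table covers only three well-known AD sub-codes and is not exhaustive:

```python
import re

# Well-known Active Directory bind sub-codes (not a complete list)
REASONS = {
    "52e": "invalid credentials",
    "533": "account disabled",
    "775": "account locked out",
}

def extract_ldap_code(log_line: str):
    """Pull the 'data NNN' sub-code from an AD bind error message, if present."""
    match = re.search(r"data\s+([0-9a-f]+)", log_line, re.IGNORECASE)
    return match.group(1) if match else None
```

Feeding a typical bind failure line through this function surfaces `52e`, which maps to the invalid-credentials case mentioned above, pointing the administrator straight at the sync service account.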
What is the first step in creating a calculated field?
Options:
Writing the calculation formula in Archer
Enable the calculated field option from the field's properties
Creating a User/Groups List field
Validating your formula
Answer:
B
Explanation:
In the Archer platform, a "Calculated Field" is not a separate field type you select from the initial list (like Text or Date). Instead, almost any standard field type can be transformed into a calculated one. According to the Archer Administration II curriculum, the foundational step is to create the field first and then go into the field’s General Tab properties to check the box labeled "Calculate its value."
Once this option is enabled, the "Formula" and "Calculation Properties" tabs appear, allowing the administrator to define the logic. You cannot write a formula (Option A) or validate it (Option D) until the field has been designated as a calculated field. Option C is irrelevant as a "first step," as calculations can be applied to many field types, not just User/Groups lists. By enabling the calculation property first, you signal to the Archer Calculation Engine that this field's value will be managed by the system rather than by manual user input.
Your organization has recently bought a new Archer use case. What steps need to be taken in order to gain access to the new use case in your environment?
Options:
Update the license key within Archer Control Panel.
Nothing, the use case will automatically appear after purchase.
Perform an upgrade of Archer
Contact your account manager who will give you access to the necessary files.
Answer:
A
Explanation:
Archer's functionality is governed by a License Key. As taught in Archer Administration II, the code for all use cases typically ships with the installer, but the use cases remain "locked" based on your organization's specific entitlements.
When a new use case is purchased, Archer provides a new alphanumeric license key. The System Administrator must log into the Archer Control Panel (ACP) , navigate to the "Instance" settings, and update the license key field. Once saved, the new use case—including its applications, workflows, and reports—becomes available for installation or activation. You do not need to perform a full system upgrade (Option C) or wait for a "push" from Archer (Option B). While you may need to download the specific Package File (.zip) for the use case from the Archer Community, the "step to gain access" and unlock the rights within your specific instance is the application of the license key in the ACP.
Which statement is NOT true regarding Bulk Update of Advanced Workflow jobs?
Options:
Bulk update jobs will automatically skip records enrolled in a workflow but already in an error state.
If there are workflow jobs in progress, you will receive a warning which tells you how many jobs may be affected by this update.
Only one update job can be run at the same time.
Making changes to a workflow can make existing jobs fail.
Answer:
A
Explanation:
The Bulk Update Jobs feature is used to migrate active records from one version of an Advanced Workflow (AWF) to another. As taught in Advanced Workflow Beyond the Basics, Archer is designed to be cautious but thorough. Statement A is NOT true because the Bulk Update process actually attempts to evaluate all records targeted for migration. If a record is in an error state, the Bulk Update tool provides an opportunity to see if the new workflow version can resolve that state or if it remains "Incompatible."
Statements B, C, and D are all accurate reflections of the platform's behavior. Archer will indeed warn you about the volume of affected records (B), and the system enforces a "one-at-a-time" rule for Bulk Updates (C) to prevent database deadlocks and performance spikes. Furthermore, it is a known risk (D) that structural changes (like deleting a node where records currently reside) can cause those specific jobs to fail during or after an update. Therefore, administrators must use the "Compatibility" check within the Bulk Update interface to identify and resolve these issues before finalizing the migration.
What action should never be completed using the Advanced Workflow Job Troubleshooting tool?
Options:
Canceling a job.
Manually moving a record to the next node.
Editing the Advanced Workflow.
Restarting a job.
Answer:
C
Explanation:
The Advanced Workflow Job Troubleshooting tool is a runtime utility designed to manage individual "instances" of records currently enrolled in a workflow. It is used to fix records that are stuck due to errors. According to the Advanced Workflow Beyond the Basics guide, this tool is purely for operational maintenance (Cancel, Reset, Restart, or "Force" movement).
Editing the Advanced Workflow structure (changing the flowchart, adding nodes, or modifying logic) cannot be done within the Troubleshooting tool. Workflow design changes must be made in the Application Builder under the Workflow tab. Attempting to "fix" a logic error by changing the design is a development task, whereas the Troubleshooting tool is an administrative task for existing data. Furthermore, editing a workflow requires saving a new version and potentially migrating active jobs, a process entirely separate from the record-level "Reset/Cancel" functions found in the Job Troubleshooting interface.
Which button on the record can you use to invoke a Mail Merge Template?
Options:
Export
Related
Extract
Answer:
A
Explanation:
The Mail Merge functionality in Archer allows administrators to take record data and push it into a pre-formatted Word or PDF template. This is a common requirement for generating formal "Exception Letters" or "Audit Reports."
As taught in the Archer Administration II curriculum, when a user is viewing a record and wishes to generate one of these documents, they must click the Export button. Upon clicking Export, the user is presented with several options: standard exports (like CSV or Rich Text) and any Mail Merge Templates that have been associated with that specific application and made available to the user's role. Option B (Related) displays linked and cross-referenced records, not formatted document templates based on Word/PDF layouts.
When installing Archer on a single server, what should you ensure?
Options:
All components (web application, services, Instance database, AWF service) are installed and configured on the same server
Components are installed on separate servers
The AWF service runs on a different server
The instance database is in the cloud
Answer:
A
Explanation:
In a Single-Server Installation (typically used for Sandbox, Development, or small-scale environments), the goal is to consolidate the entire Archer architecture onto one machine. According to the Archer Administration II setup guides, this means the Web Server (IIS), the Archer Services (Job Engine, Indexing), the Advanced Workflow (AWF) service, and the SQL Server Instance Database must all reside on that single host.
While production environments favor a distributed model (Option B) for performance and redundancy, a single-server setup requires that all roles are configured to point to "localhost" or the local server's IP. The administrator must ensure the server has sufficient CPU and RAM to handle the overlapping resource demands of the SQL database engine and the Archer web services simultaneously. Options C and D describe "Distributed" or "Hybrid" models, which contradict the definition of a single-server installation.
Which statement is NOT true for the Archer APIs?
Options:
The RESTful API can use XML for the API request and response.
The RESTful API can use JSON for the API request and response.
The SOAP API can use JSON for the API request and response.
The Content API can use XML for the API request and response.
Answer:
C
Explanation:
According to the Archer API Guide and Administration II materials, the Archer APIs have specific formatting requirements based on their architecture. The SOAP API (Simple Object Access Protocol) is strictly bound to the XML-based SOAP protocol. It uses a Web Services Description Language (WSDL) to define the structure of its messages, and these messages must be formatted as XML. It cannot process or return JSON payloads.
In contrast, the RESTful API is more flexible; while it defaults to JSON for modern integrations, it is capable of supporting both XML and JSON depending on the "Content-Type" and "Accept" headers provided in the request. The Content API, which is a specific subset of the Archer RESTful infrastructure, also follows these multi-format capabilities. Therefore, the statement that the SOAP API can use JSON is the incorrect one. For administrators building integrations, understanding this distinction is vital, as modern web applications typically prefer JSON, but legacy Archer SOAP services will reject any request that is not valid XML.
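In practice the distinction shows up in the HTTP headers a client sends. A generic sketch of the content negotiation, not tied to any specific Archer endpoint:

```python
def build_headers(fmt: str) -> dict:
    """Build content-negotiation headers for a REST/Content API call.
    SOAP, by contrast, is always text/xml with a SOAPAction header."""
    media = {"json": "application/json", "xml": "application/xml"}[fmt]
    return {
        "Content-Type": media,  # format of the request body
        "Accept": media,        # format the client wants back
    }
```

A REST client can switch between `build_headers("json")` and `build_headers("xml")` freely; a SOAP client has no such choice, which is the crux of why Statement C is false.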
Which users are capable of performing data imports?
Options:
Only System Administrators
Only Report Administrators
Members of the Data Import group
Any user with proper access role permissions
Answer:
D
Explanation:
In Archer, the ability to perform a data import is governed by Access Roles, not strictly by high-level system administrative accounts. Under the "Rights" tab of an Access Role configuration, there is a specific permission labeled "Data Import." According to the Archer Administration II curriculum, any user assigned to a role with this right enabled—and who also possesses "Create" or "Update" permissions for the target application—can utilize the Data Import tool.
While a System Administrator (Option A) inherently has these rights, the platform is designed for delegated administration. Therefore, a business user or a "Power User" can be granted the specific ability to import data without having full system-wide control. Options B and C are incorrect because "Report Administrators" focus on dashboard and report metadata, and while an administrator might create a custom group named "Data Import," no such hard-coded group exists by default in the Archer out-of-the-box security model. The granularity of Archer's security ensures that data integrity is maintained by linking the import capability directly to the user's functional role and application-level permissions.
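The two-part check described above (role right plus application-level permission) can be modeled generically. The right and permission names below mirror the description in this explanation, not an actual Archer API:

```python
def can_import(role_rights: set, app_permissions: set) -> bool:
    """A user can run a Data Import when their Access Role grants the
    'Data Import' right AND they hold Create or Update rights on the
    target application."""
    has_import_right = "Data Import" in role_rights
    has_app_access = bool({"Create", "Update"} & app_permissions)
    return has_import_right and has_app_access
```

A power user with the Data Import right and Create access passes the check; the same right with read-only application access does not, which captures the delegated-administration model described above.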