Workday Pro Prism Analytics Exam Questions and Answers
You want to configure access to a published Prism data source to use it in reporting and discovery boards. What action must you take?
Options:
A. Edit the data source security and select a domain.
B. Share the dataset with appropriate users.
C. Share the imported Workday report to provide users with access to the published Prism data source.
D. Schedule the recurring publish process.
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, configuring access to a published Prism data source for use in reporting and discovery boards requires managing its security settings. According to the official Workday Prism Analytics study path documents, the necessary action is to edit the data source security and select a domain (option A). After a dataset is published as a Prism data source, access is controlled through security domains. By editing the data source security and assigning it to an appropriate security domain (e.g., a domain that grants access to specific user groups like report writers or analysts), you ensure that authorized users can access the data source for reporting and discovery boards. This aligns with Workday’s configurable security framework, ensuring that only users with the appropriate permissions can view or use the data source.
The other options are incorrect:
B. Share the dataset with appropriate users: Sharing the dataset itself does not grant access to the published Prism data source; access to the data source is controlled through its security settings, not the dataset’s sharing settings.
C. Share the imported Workday report to provide users with access to the published Prism data source: Sharing an imported Workday report does not affect access to the Prism data source; the data source’s security must be configured directly.
D. Schedule the recurring publish process: Scheduling a recurring publish process ensures the data source is updated regularly, but it does not configure access for reporting or discovery boards.
Editing the data source security and selecting a domain is the critical step to enable access for reporting and discovery boards.
While viewing your lineage, you realize you have forgotten to add a description to some of your derived datasets. From the lineage, you double-click on a dataset to view the dataset details. What is the next step to add the missing descriptions?
Options:
A. Select the pencil icon next to the dataset name and Edit Transformations.
B. Select the pencil icon next to the Import stage to update the description.
C. Select Related Actions next to the dataset name and Edit Transformations.
D. Select Add Field from the dataset details to create a description.
Answer: C
Explanation:
To add or update the description of a derived dataset in Workday Prism Analytics, you should access the Edit Dataset Transformations task. This can be done by selecting the Related Actions next to the dataset name and choosing Edit Transformations. This method allows you to modify various aspects of the dataset, including its description.
This process is outlined in the Workday Prism Analytics User Guide, which states:
"If you have permission to edit a dataset, you can access the Edit Dataset Transformations task using these methods:
• Right-click the dataset name on the Data Catalog report and select Edit Transformations.
• Select Edit Transformations from the Quick Actions on the View Dataset Details report.
• Access the Edit Dataset task and select the dataset name that you want to edit."
Once in the Edit Dataset Transformations task, you can update the dataset's description by clicking on the configuration icon (often represented as a gear or pencil icon) and editing the description field.
When should a Prism configurator leverage advanced filter logic over basic filter logic?
Options:
A. The filter needs to remove NULL values.
B. The filter needs to use operators such as "equal to" or "not equal to".
C. The filter needs to leverage operators such as "greater than or equal to" or "less than or equal to".
D. The filter needs a combination of AND/OR operators.
Answer: D
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, filters in a derived dataset can be applied using either basic (Simple) or advanced filter logic. According to the official Workday Prism Analytics study path documents, a Prism configurator should leverage advanced filter logic over basic filter logic when the filter needs a combination of AND/OR operators (option D). Basic filter logic (Simple Filter) allows for a list of conditions with a single operator ("If All" for AND, "If Any" for OR), but it cannot handle nested or mixed logical expressions (e.g., Condition1 AND (Condition2 OR Condition3)). Advanced filter logic, on the other hand, supports complex expressions with combinations of AND and OR operators, enabling more sophisticated filtering scenarios.
The other options do not necessitate advanced filter logic:
A. The filter needs to remove NULL values: Removing NULL values (e.g., using ISNOTNULL(field)) can be done with a Simple Filter using a single condition, so advanced logic is not required.
B. The filter needs to use operators such as "equal to" or "not equal to": These operators are supported in Simple Filters, so advanced logic is not necessary.
C. The filter needs to leverage operators such as "greater than or equal to" or "less than or equal to": These comparison operators are also supported in Simple Filters, making advanced logic unnecessary for this purpose.
Advanced filter logic is specifically required when combining AND and OR operators to create complex filtering conditions, providing the flexibility needed for such scenarios.
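The distinction can be illustrated outside of Prism. The following Python sketch (using hypothetical field names, not Prism syntax) shows why a single "If All"/"If Any" operator cannot express a condition such as Condition1 AND (Condition2 OR Condition3):

```python
# Illustration (not Prism syntax): a simple filter joins every condition
# with one operator, while advanced logic mixes AND with a nested OR.
rows = [
    {"region": "US", "status": "Open", "amount": 500},
    {"region": "US", "status": "Closed", "amount": 3000},
    {"region": "EU", "status": "Open", "amount": 3000},
]

# Simple "If All": every condition joined with AND.
if_all = [r for r in rows if r["region"] == "US" and r["amount"] > 1000]

# Advanced logic: AND combined with a nested OR.
advanced = [r for r in rows
            if r["region"] == "US"
            and (r["status"] == "Open" or r["amount"] > 1000)]

print(len(if_all))    # 1 row passes the all-AND filter
print(len(advanced))  # 2 rows pass the mixed AND/OR filter
```

The nested parentheses in the second filter are exactly what basic filter logic cannot represent, which is why such scenarios require the advanced option.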
A Prism data writer needs to create a new Prism calculated field on a derived dataset using the CASE function. When creating a calculated field, what symbol do you use to view a list of fields that you can select from in the dataset?
Options:
A. [
B. (
C. #
D. {
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, when creating a calculated field in a derived dataset, users often need to reference existing fields in the dataset within their expressions, such as in a CASE function. According to the official Workday Prism Analytics study path documents, to view and select from a list of available fields in the dataset while building a calculated field expression, the user types the [ symbol (left square bracket). This symbol triggers a dropdown list of all fields in the dataset, allowing the user to select the desired field without manually typing its name, reducing the risk of errors. For example, typing [ and selecting a field like "Employee_ID" will insert [Employee_ID] into the expression, which can then be used in the CASE function logic.
The other symbols do not serve this purpose:
B. (: Parentheses are used for grouping expressions or defining function parameters, not for field selection.
C. #: The hash symbol is not used in Prism Analytics for field selection; it may be associated with other functionalities in different contexts.
D. {: Curly braces are not used for field selection in Prism Analytics; they may be used in other systems for different purposes, such as templating.
The use of the [ symbol ensures an efficient and accurate way to reference fields in a calculated field expression, streamlining the creation process in Prism Analytics.
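As an analogy for what such a calculated field computes, here is a Python sketch (hypothetical field names; Prism's bracketed references like [Employee_Type] map to dictionary keys here, and this is not Prism expression syntax):

```python
# Python analogue of a CASE-style calculated field (not Prism syntax).
# A Prism reference like [Employee_Type] corresponds to row["Employee_Type"].
def case_bonus_rate(row):
    if row["Employee_Type"] == "Full_Time":
        return 0.10
    elif row["Employee_Type"] == "Part_Time":
        return 0.05
    else:
        return 0.0

workers = [{"Employee_Type": "Full_Time"}, {"Employee_Type": "Contractor"}]
rates = [case_bonus_rate(w) for w in workers]
print(rates)  # [0.1, 0.0]
```

The dropdown triggered by [ serves the same purpose as autocomplete in a code editor: it guarantees the field reference in the expression matches an actual field in the dataset.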
For a Prism use case, you have two datasets: one contains daily sales data, and the other contains monthly budget allocations. Before performing a join between these datasets, what transformation stage should you apply to the sales data to ensure it matches the granularity of the budget data?
Options:
A. Union
B. Group By
C. Manage Fields
D. Filter
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, joining datasets with different levels of granularity requires aligning their granularity to ensure a meaningful match. The sales data is at a daily level (one row per day), while the budget data is at a monthly level (one row per month). According to the official Workday Prism Analytics study path documents, to match the granularity of the monthly budget data, you should apply a Group By stage to the sales data (option B). The Group By stage aggregates the daily sales data into monthly totals (e.g., summing sales amounts by month), reducing the granularity from daily to monthly. This allows the sales data to be joined with the monthly budget data on a common key, such as the month.
For example, a Group By stage could group the sales data by a derived month field (e.g., using a function like EXTRACT(YEAR_MONTH, sale_date)) and aggregate the sales amounts using a function like SUM(sales_amount). The resulting dataset would have one row per month, matching the budget data’s granularity.
The other options are incorrect:
A. Union: A Union stage appends rows from one dataset to another but does not change granularity; it cannot aggregate daily data into monthly data.
C. Manage Fields: The Manage Fields stage modifies field properties (e.g., type, name) but does not aggregate data to change granularity.
D. Filter: A Filter stage removes rows based on conditions but does not aggregate data to align granularity levels.
The Group By stage is the appropriate transformation to align the sales data’s granularity with the monthly budget data for a successful join.
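The aggregation-then-join pattern can be sketched in plain Python (hypothetical data and field names; Prism performs this in its Group By and Join stages):

```python
# Sketch: daily sales rows are aggregated to monthly totals so they can
# join one-to-one with monthly budget rows.
from collections import defaultdict

daily_sales = [
    ("2024-01-05", 100.0),
    ("2024-01-20", 250.0),
    ("2024-02-03", 400.0),
]
monthly_budget = {"2024-01": 500.0, "2024-02": 450.0}

# Group By step: derive a month key and SUM the sales amounts per month.
monthly_sales = defaultdict(float)
for sale_date, amount in daily_sales:
    monthly_sales[sale_date[:7]] += amount  # "YYYY-MM" month key

# Join step: both sides now have one row per month.
joined = {m: (monthly_sales[m], monthly_budget[m]) for m in monthly_budget}
print(joined)  # {'2024-01': (350.0, 500.0), '2024-02': (400.0, 450.0)}
```

Joining without the aggregation step would instead match every daily row against the month's budget row, duplicating budget amounts across days.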
You created a derived dataset that imports data from a table, which will become your Stage 1. What can you add to this dataset?
Options:
A. As many transformation stages of any type as your scenario requires.
B. As many transformation stages of any type as long as they are in a particular order.
C. Up to five transformation stages.
D. Up to two Manage Fields transformation stages.
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, a derived dataset (DDS) allows users to transform data by adding various transformation stages after the initial import stage (Stage 1). According to the official Workday Prism Analytics study path documents, you can add as many transformation stages of any type as your scenario requires (option A). Prism Analytics supports a variety of transformation stages, such as Join, Union, Filter, Manage Fields, and Calculate Field, among others. There are no strict limits on the number of stages or their types, and they can be added in any order that makes sense for the data transformation logic, as long as the stages are configured correctly to produce the desired output. This flexibility allows users to build complex transformation pipelines tailored to their specific use case.
The other options are incorrect:
B. As many transformation stages of any type as long as they are in a particular order: While the order of stages matters for the transformation logic (e.g., a Filter before a Join), there is no predefined order requirement for all stages; the order depends on the scenario.
C. Up to five transformation stages: There is no limit of five transformation stages in Prism Analytics; you can add more as needed.
D. Up to two Manage Fields transformation stages: There is no restriction to only two Manage Fields stages; you can add as many as required.
The ability to add as many transformation stages as needed provides maximum flexibility in shaping the data within a derived dataset.
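Conceptually, a derived dataset behaves like an ordered pipeline of stage functions. This Python sketch (hypothetical stage functions, not Prism internals) illustrates stages of different types applied in whatever sequence the scenario requires:

```python
# Sketch: a derived dataset as an ordered pipeline of arbitrary stages.
def filter_stage(rows):
    # Keep only active rows, like a Prism Filter stage.
    return [r for r in rows if r["active"]]

def manage_fields_stage(rows):
    # Keep only the fields needed downstream, like a Manage Fields stage.
    return [{"name": r["name"]} for r in rows]

rows = [{"name": "Ana", "active": True}, {"name": "Bo", "active": False}]

# Stages apply in sequence; any number, any type, in scenario-driven order.
stages = [filter_stage, manage_fields_stage]
for stage in stages:
    rows = stage(rows)
print(rows)  # [{'name': 'Ana'}]
```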
What is the primary purpose of window functions in Prism?
Options:
A. To provide row-level access control.
B. To manipulate strings and dates within a query.
C. To filter rows based on specified conditions.
D. To perform calculations across a set of rows related to the current row while partitioning the data.
Answer: D
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Window functions in Workday Prism Analytics are a powerful feature used in dataset transformations to perform advanced calculations. According to the official Workday Prism Analytics study path documents, the primary purpose of window functions is to perform calculations across a set of rows related to the current row while partitioning the data. These functions allow users to compute values such as running totals, rankings, or aggregations (e.g., SUM, COUNT, RANK) within a defined “window” of rows, which can be partitioned by specific columns and ordered as needed. Window functions operate without collapsing the dataset (unlike group-by aggregations), preserving the original row structure while adding calculated results.
The other options do not describe the purpose of window functions:
A. To provide row-level access control: Row-level access control is managed through security domains and policies, not window functions.
B. To manipulate strings and dates within a query: String and date manipulations are handled by other functions (e.g., CONCAT, DATEADD), not window functions.
C. To filter rows based on specified conditions: Filtering is achieved using WHERE clauses or filter stages, not window functions.
Window functions are essential for complex analytical calculations, such as ranking employees within a department or calculating cumulative totals, making them a key tool in Prism’s data transformation capabilities.
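The partition-without-collapsing behavior can be shown in plain Python (hypothetical data; Prism expresses this declaratively through its window function syntax):

```python
# Sketch of a window calculation: rank salaries within each department
# partition. Every input row survives, unlike a Group By aggregation.
workers = [
    {"dept": "HR", "name": "Ana", "salary": 90},
    {"dept": "HR", "name": "Bo", "salary": 70},
    {"dept": "IT", "name": "Cy", "salary": 80},
]

# Partition by dept, order by salary descending, assign a rank per row.
for w in workers:
    peers = [p for p in workers if p["dept"] == w["dept"]]
    w["rank"] = sorted(peers, key=lambda p: -p["salary"]).index(w) + 1

print([(w["name"], w["rank"]) for w in workers])
# [('Ana', 1), ('Bo', 2), ('Cy', 1)] — three rows in, three rows out
```

A Group By on dept would have collapsed this to two rows; the window calculation instead annotates each original row with its rank inside its partition.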
You want to apply a Filter stage to your derived dataset to show only expense reports submitted in the current month and where the expense report total amount is higher than 2000 USD. What should you do?
Options:
A. Use a simple filter, two conditions, and "If All" operator.
B. Use a simple filter, two conditions, and "If Any" operator.
C. Use a simple filter, three conditions, and "If All" operator.
D. Use a simple filter, three conditions, and "If Any" operator.
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, a Filter stage in a derived dataset is used to include only rows that meet specific criteria. The requirement here is to show expense reports that satisfy two conditions: (1) submitted in the current month, and (2) total amount higher than 2000 USD. According to the official Workday Prism Analytics study path documents, this can be achieved by using a simple filter with two conditions and the "If All" operator (option A).
The first condition would check the submission date, using a function like MONTH() to compare with the current month (e.g., MONTH(submission_date) = MONTH(CURRENT_DATE())). The second condition would compare the total amount (e.g., total_amount > 2000). The "If All" operator ensures that both conditions must be true for a row to be included, which aligns with the requirement that both criteria (current month AND amount > 2000 USD) must be met. A simple filter is sufficient because the logic involves straightforward comparisons without nested conditions.
The other options are incorrect:
B. Use a simple filter, two conditions, and "If Any" operator: The "If Any" operator would include rows where either condition is true (e.g., submitted in the current month OR amount > 2000 USD), which does not meet the requirement for both conditions to be true.
C. Use a simple filter, three conditions, and "If All" operator: Only two conditions are needed (submission month and amount), so three conditions are unnecessary.
D. Use a simple filter, three conditions, and "If Any" operator: This combines the issues of option B (wrong operator) and option C (too many conditions).
Using a simple filter with two conditions and the "If All" operator ensures the dataset includes only the expense reports that meet both criteria.
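In Python terms (hypothetical field names, not Prism syntax), "If All" corresponds to requiring every condition in the list to hold, i.e. a logical AND:

```python
# Sketch: "If All" over a two-condition list is a logical AND.
from datetime import date

reports = [
    {"submitted": date(2024, 6, 3), "total": 2500},
    {"submitted": date(2024, 6, 10), "total": 1500},
    {"submitted": date(2024, 5, 28), "total": 9000},
]
today = date(2024, 6, 15)  # stand-in for the current date

conditions = [
    lambda r: (r["submitted"].year, r["submitted"].month)
              == (today.year, today.month),   # submitted this month
    lambda r: r["total"] > 2000,              # total above 2000 USD
]

# "If All": keep a row only when every condition is true.
kept = [r for r in reports if all(c(r) for c in conditions)]
print(len(kept))  # 1 — only the June report over 2000 survives
```

Swapping `all` for `any` models the "If Any" operator and would wrongly keep all three rows here.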
You accidentally delete a Prism calculated field that is used in other Prism calculated fields or conditions. What is a possible outcome?
Options:
A. The system will automatically reverse the deletion because the field is referenced elsewhere.
B. Any calculated field referencing the deleted field defaults to zero.
C. Errors will result in any stage or calculated field that references the field.
D. The system will automatically adjust any dependencies accordingly.
Answer: C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, calculated fields are often interdependent, with one calculated field referencing another in its expression or being used in conditions within a dataset’s transformation stages. According to the official Workday Prism Analytics study path documents, if a calculated field is deleted while other calculated fields or conditions depend on it, the system does not automatically handle the dependency. Instead, this deletion will cause errors in any stage or calculated field that references the deleted field. These errors occur because the dependent calculations or conditions can no longer resolve the reference to the deleted field, leading to failures in the dataset’s transformation pipeline or when the dataset is processed or published.
The other options are incorrect:
A. The system will automatically reverse the deletion because the field is referenced elsewhere: Prism Analytics does not have an automatic reversal mechanism for deletions; users must manually restore the field if needed.
B. Any calculated field referencing the deleted field defaults to zero: The system does not default to zero; it will instead throw an error due to the unresolved reference.
D. The system will automatically adjust any dependencies accordingly: Prism does not automatically adjust dependencies; the user must manually update the dependent fields or conditions to resolve the issue.
The resulting errors highlight the importance of carefully managing dependencies when deleting calculated fields, ensuring that all references are updated or removed to avoid disruptions in the dataset’s transformation logic.
Why should you include Workday instance field types in the Workday report that you use to import data into Prism?
Options:
A. The final Prism datasource can support drilling into Workday objects.
B. Performance is improved in the final Prism datasource when published.
C. Unions are more easily performed with instance field types.
D. Joins are more easily performed with instance field types.
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
When importing data into Workday Prism Analytics from a Workday report, including Workday instance field types in the report is critical for enabling specific functionality in the resulting Prism data source. According to the official Workday Prism Analytics study path documents, including instance field types allows the final Prism data source to support drilling into Workday objects. Instance field types represent references to Workday business objects (e.g., Worker, Position, or Organization), and including them in the report ensures that the Prism data source retains the ability to navigate to these objects within Workday’s reporting and analytics framework. This enables users to perform drill-down actions, such as accessing detailed object data directly from Prism visualizations or reports.
The other options do not accurately reflect the primary benefit of including instance field types:
B. Performance is improved in the final Prism datasource when published: Instance field types do not directly impact the performance of the published data source; performance is more influenced by data volume and indexing.
C. Unions are more easily performed with instance field types: Unions depend on schema compatibility, not instance field types, which are specific to Workday object references.
D. Joins are more easily performed with instance field types: While instance field types can be used in joins, their primary purpose is to enable object navigation, not to simplify join operations.
By including instance field types, the Prism data source gains enhanced interactivity, allowing users to leverage Workday’s object model for deeper analysis and navigation.
Using three different source files, you want to load rows of data into an empty table through a Data Change task. What needs to be the same about the three source files?
Options:
A. Schema
B. Source
C. Naming convention
D. Size
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, a Data Change task is used to load or update data into a table, which can involve importing data from multiple source files. According to the official Workday Prism Analytics study path documents, when loading rows from multiple source files into an empty table, the source files must share the same schema. The schema defines the structure of the data, including the column names, data types, and their order, which ensures that the data from all source files can be consistently mapped and loaded into the target table without errors.
The schema is critical because the Data Change task relies on a predefined table structure to process the incoming data. If the schemas of the source files differ (e.g., different column names or data types), the task will fail due to inconsistencies in data mapping. The other options are not required to be the same:
B. Source: The source files can originate from different systems or locations (e.g., Workday, external systems, or file uploads) as long as the schema aligns.
C. Naming convention: The names of the source files do not need to follow a specific convention for the Data Change task to process them.
D. Size: The size of the source files (e.g., number of rows or file size) can vary, as the task processes the data based on the schema, not the volume.
Thus, the requirement for the source files to have the same schema ensures seamless data loading into the table, maintaining data integrity and consistency during the transformation process.
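The schema check can be sketched in Python (hypothetical file contents and column names; the Data Change task performs the equivalent validation internally):

```python
# Sketch: loading multiple sources into one table only works when every
# file shares the same schema (column names, types, and order).
import csv
import io

file_a = "worker_id,cost\n1,100\n2,200\n"
file_b = "worker_id,cost\n3,300\n"

expected_schema = ["worker_id", "cost"]
table = []
for content in (file_a, file_b):
    reader = csv.reader(io.StringIO(content))
    header = next(reader)
    # A schema mismatch at this point is what makes the load fail.
    assert header == expected_schema, f"schema mismatch: {header}"
    table.extend(reader)

print(len(table))  # 3 — rows from both sources land in one table
```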
You want to create a Prism calculated field to change the field type to date data using the TO_DATE function. The field from Workday is numeric data and you will use the Manage Fields stage to prepare the data for use in the function. What will you need to change about the field in the Manage Fields stage?
Options:
A. Output Type
B. Output Name
C. Input Type
D. Input Name
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, the TO_DATE function in a calculated field is used to convert a string or compatible data type into a date. However, in this scenario, the field from Workday is numeric, and the TO_DATE function typically requires a string input (e.g., a numeric value like 20230101 needs to be converted to a string like "20230101" before applying TO_DATE). According to the official Workday Prism Analytics study path documents, to prepare the numeric field for use with the TO_DATE function, you must first use a Manage Fields stage to change the field’s Output Type to Text. The Manage Fields stage allows you to modify the field’s properties, and changing the Output Type from Numeric to Text converts the numeric values into a string format that the TO_DATE function can then process (e.g., TO_DATE([Field_Name], "YYYYMMDD")).
The other options are not relevant:
B. Output Name: Changing the Output Name renames the field but does not address the field type compatibility required for the TO_DATE function.
C. Input Type: The Manage Fields stage does not modify an "Input Type"; it adjusts the Output Type to transform the field as it moves through the pipeline.
D. Input Name: There is no "Input Name" property in the Manage Fields stage; this option is not applicable.
By changing the Output Type to Text in the Manage Fields stage, the numeric field is converted to a string, making it compatible with the TO_DATE function for creating a date field in the calculated field.
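The two-step conversion can be mirrored in Python (illustrative only, not Prism syntax): a raw number such as 20230101 carries no date structure, but once cast to text, a TO_DATE-style parse with an explicit format can interpret it.

```python
# Sketch: numeric -> text (Manage Fields step), then text -> date
# (TO_DATE-style calculated field step).
from datetime import datetime

numeric_field = 20230101

# Manage Fields step: change Output Type from Numeric to Text.
text_field = str(numeric_field)

# Calculated field step: parse the text with a "YYYYMMDD"-style format.
parsed = datetime.strptime(text_field, "%Y%m%d").date()
print(parsed)  # 2023-01-01
```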
You have published a derived dataset to build a Prism data source. For reports using this Prism data source, when is data updated?
Options:
A. At republish of the datasource only.
B. At reimport into tables and republish of the datasource.
C. At reimport into tables only.
D. At report runtime.
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, a published Prism data source (PDS) contains a snapshot of data from a derived dataset at the time of publishing. According to the official Workday Prism Analytics study path documents, for reports using a Prism data source, the data is updated at reimport into tables and republish of the datasource (option B). A derived dataset typically sources data from underlying tables (via import stages), and any updates to the source data require two steps: (1) reimporting the updated data into the tables (e.g., via a Data Change task), and (2) republishing the derived dataset to refresh the Prism data source with the new data. Reports using the PDS will reflect the updated data only after both steps are completed, as the data source is a static snapshot until republished.
The other options are incorrect:
A. At republish of the datasource only: Republishing alone does not update the data if the underlying tables have not been reimported with new data; both steps are necessary.
C. At reimport into tables only: Reimporting into tables updates the source data, but the PDS remains unchanged until the dataset is republished.
D. At report runtime: Reports do not dynamically update the PDS at runtime; they use the data as it exists in the PDS at the time of the last publish.
The combination of reimporting into tables and republishing the data source ensures that reports reflect the most current data.
A Prism data writer has to create an intermediary Prism calculated field A, used only to achieve a final result in Prism calculated field B and they only need to publish out field B. What should they do?
Options:
A. Mark field A as intermediate calculation.
B. Add a Manage Fields stage to the DDS and hide field A.
C. Add a Manage Fields stage to the DDS and hide field B.
D. Delete field A from their DDS and just leave field B.
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, when a data writer creates an intermediary calculated field (e.g., field A) solely to derive a final calculated field (e.g., field B) in a Derived Dataset (DDS), they may want to exclude the intermediary field from the published output to keep the dataset clean and focused. According to the official Workday Prism Analytics study path documents, the recommended approach is to add a Manage Fields stage to the DDS and hide field A. The Manage Fields stage allows users to control the visibility of fields in the dataset, enabling them to hide fields that are not needed in the final output while retaining their calculations for internal use within the dataset’s transformation logic. By hiding field A, field B can still leverage field A’s calculations, and only field B will be visible in the published dataset or data source.
The other options are not suitable:
A. Mark field A as intermediate calculation: There is no specific feature in Prism Analytics to “mark” a field as an intermediate calculation; this is not a supported action.
C. Add a Manage Fields stage to the DDS and hide field B: Hiding field B would defeat the purpose, as field B is the intended output to be published.
D. Delete field A from their DDS and just leave field B: Deleting field A would break the calculation of field B, as field B depends on field A, making this option infeasible.
Using the Manage Fields stage to hide field A ensures that the dataset remains functional while presenting only the necessary fields in the final output, aligning with best practices for data transformation and publishing.
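The pattern can be sketched in Python (hypothetical field names; Prism hides the field declaratively in the Manage Fields stage rather than dropping it from each row):

```python
# Sketch: field A feeds field B's calculation, then a Manage Fields-style
# step hides A so only B reaches the published output.
rows = [{"hours": 10, "rate": 50}]

for r in rows:
    r["field_a"] = r["hours"] * r["rate"]   # intermediary calculation
    r["field_b"] = r["field_a"] * 1.2       # final result built on field A

# "Hide" field A: exclude it from the output while field B keeps its value.
published = [{k: v for k, v in r.items() if k != "field_a"} for r in rows]
print(published)  # [{'hours': 10, 'rate': 50, 'field_b': 600.0}]
```

Note that the calculation of field B happens before the hide step, which is why hiding (rather than deleting) field A leaves field B intact.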