
iSQI CT-GenAI Dumps

Page: 1 / 4
Total 40 questions

ISTQB Certified Tester Testing with Generative AI (CT-GenAI) v1.0 Questions and Answers

Question 1

Which statement BEST differentiates an LLM-powered test infrastructure from a traditional chatbot system used in testing?

Options:

A.

It dynamically generates test insights using contextual information

B.

It produces scripted conversational responses similar to traditional bots

C.

It focuses primarily on visual dashboards and user navigation features

D.

It provides fixed responses from predefined rule sets and scripts

Question 2

What is a hallucination in LLM outputs?

Options:

A.

A transient network failure during inference

B.

A logical mistake in multi-step deduction

C.

Generation of factually incorrect content for the task

D.

A systematic preference learned from data

Question 3

Consider applying the meta-prompting technique to generate automated test scripts for API testing. You need to test a REST API endpoint that processes user registration with validation rules. Which one of the following prompts is BEST suited to this task?

Options:

A.

Role: Act as a test automation engineer with API testing experience. | Context: You are verifying user registration that enforces field and format validation. | Instruction: Generate pytest scripts using requests for both positive (valid) and negative (invalid email, weak password, missing fields) cases. | Input Data: POST /api/register with validation rules for email and password length. | Constraints: Include fixtures, clear assertions, a

B.

Role: Act as a test automation engineer. | Context: You are creating tests for a registration endpoint. | Instruction: Generate Python test scripts using pytest covering both valid and invalid inputs. | Input Data: POST /api/register with email and password. | Constraints: Follow pytest structure. | Output Format: Provide scripts.

C.

Role: Act as an automation tester. | Context: You are validating an API endpoint. | Instruction: Generate Python test scripts that send POST requests and validate responses. | Input Data: User credentials. | Constraints: Include basic scenarios with asserts. | Output Format: Provide organized scripts.

D.

Role: Act as a software engineer. | Context: You are testing registration logic. | Instruction: Create Python scripts to verify endpoint behavior. | Input Data: POST /api/register with test users. | Constraints: Add checks for status codes. | Output Format: Deliver functional scripts.
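
A prompt structured like option A would typically elicit scripts along the following lines. This is a minimal sketch, not the exam's reference output; the base URL, payloads, and expected status codes (201 on success, 400 on validation failure) are illustrative assumptions.

```python
# Sketch of the pytest/requests scripts such a prompt might produce.
# Endpoint URL, payloads, and status codes are assumed for illustration.
import pytest
import requests

BASE_URL = "http://localhost:8000"  # hypothetical service under test


@pytest.fixture
def valid_user():
    return {"email": "user@example.com", "password": "Str0ng!Passw0rd"}


def test_register_valid_user(valid_user):
    # Positive case: well-formed email, sufficiently strong password
    resp = requests.post(f"{BASE_URL}/api/register", json=valid_user)
    assert resp.status_code == 201


@pytest.mark.parametrize("payload", [
    {"email": "not-an-email", "password": "Str0ng!Passw0rd"},  # invalid email
    {"email": "user@example.com", "password": "short"},        # weak password
    {"email": "user@example.com"},                             # missing field
])
def test_register_invalid_input(payload):
    # Negative cases: the API is assumed to reject each with 400
    resp = requests.post(f"{BASE_URL}/api/register", json=payload)
    assert resp.status_code == 400
```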

Question 4

An attacker sends extremely long prompts to overflow the context window so that the model leaks snippets of its training data. Which attack vector is this?

Options:

A.

Data poisoning

B.

Malicious code generation

C.

Data exfiltration

D.

Request manipulation

Question 5

Which statement BEST contrasts the interaction style and scope of chatbots and LLM-based applications used in testing?

Options:

A.

Chatbots enable conversational interactions; LLM apps provide capabilities for defined test tasks.

B.

Chatbots enforce fixed workflows; LLM apps support free-form exploration beneficial for software testing.

C.

Chatbots require API integration; LLM apps do not.

D.

Both are identical aside from UI theme.

Question 6

Which setting can reduce variability by narrowing the sampling distribution during inference?

Options:

A.

Increasing temperature

B.

Increasing learning rate

C.

Lowering temperature

D.

Using a larger context window
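
To see why lowering temperature narrows the sampling distribution, note that logits are divided by the temperature before the softmax, so values below 1 sharpen the distribution. A self-contained sketch with made-up logits:

```python
# Illustrative only: how temperature rescales logits before softmax.
# Lower temperature concentrates probability mass on the top token.
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 1.0))  # ~[0.63 0.23 0.14]
print(softmax_with_temperature(logits, 0.2))  # ~[0.99 0.01 0.00], far less variable
```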

Question 7

Who typically defines the system prompt in a testing workflow?

Options:

A.

A tester configuring the assistant

B.

End user during normal chat use

C.

CI server automatically without human input

D.

Product owner in user stories only

Question 8

You must generate test cases for a new payments rule. The system includes API specifications stored in a vector database and prior tests in a relational database. Which of the following sequences BEST represents the correct order for applying a Retrieval-Augmented Generation (RAG) workflow?

i. Retrieve semantically similar specification chunks from the vector database

ii. Feed both retrieved datasets as context for the LLM to generate new test cases

iii. Retrieve relevant historical cases from the relational database

iv. Submit a focused query describing the new test requirement

Options:

A.

iv → iii → i → ii

B.

iv → i → iii → ii

C.

iii → iv → i → ii

D.

i → iv → iii → ii
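
As a sketch of the workflow the stem describes: the focused query is formulated first, both stores are queried for context, and generation happens last. All names below (embed, vector_db, sql_db, llm) are hypothetical placeholders rather than a specific library API, and the two retrieval steps are shown in one possible order.

```python
# Hypothetical RAG pipeline for the payments scenario; only the step
# ordering is the point. All dependencies are injected placeholders.

def generate_payment_tests(requirement, embed, vector_db, sql_db, llm):
    # (iv) Submit a focused query describing the new test requirement
    query = f"Generate test cases for: {requirement}"

    # (i) Retrieve semantically similar spec chunks from the vector database
    spec_chunks = vector_db.search(embedding=embed(query), top_k=5)

    # (iii) Retrieve relevant historical cases from the relational database
    prior_tests = sql_db.query(
        "SELECT title, steps FROM test_cases WHERE feature = ?", ("payments",)
    )

    # (ii) Feed both retrieved datasets as context for the LLM to generate
    context = {"specifications": spec_chunks, "history": prior_tests}
    return llm.generate(prompt=query, context=context)
```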

Question 9

In the context of software testing, which statements (i-v) about foundation, instruction-tuned, and reasoning LLMs are CORRECT?

i. Foundation LLMs are best suited for broad exploratory ideation when test requirements are underspecified.

ii. Instruction-tuned LLMs are strongest at adhering to fixed test case formats (e.g., Gherkin) from clear prompts.

iii. Reasoning LLMs are strongest at multi-step root-cause analysis across logs, defects, and requirements.

iv. Foundation LLMs are optimal for strict policy compliance and template conformance.

v. Instruction-tuned LLMs can follow stepwise reasoning without any additional training or prompting.

Options:

A.

i, ii, iii

B.

i, iii, v

C.

i, ii, iii (duplicated from option A in the original source)

D.

ii, iii, iv

Question 10

How do tester responsibilities MOSTLY evolve when integrating GenAI into test processes?

Options:

A.

Replacing existing test coverage validation with automated summary reports generated by AI

B.

Transitioning from manual execution to complete automation with no human oversight

C.

Moving from black-box exploratory testing toward exclusively performing code-based white-box checks

D.

Shifting from test execution toward reviewing, refining, and validating AI-generated testware

Question 11

Which option BEST differentiates the three prompting techniques: few-shot prompting, prompt chaining, and meta-prompting?

Options:

A.

Few-shot = no examples; Chaining = single prompt; Meta = disable iteration

B.

Meta = step decomposition; Chaining = zero-shot only; Few-shot = manual optimization

C.

Chaining = give examples; Few-shot = break tasks; Meta = manual edits only

D.

Few-shot = examples; Chaining = multi-step prompts; Meta = model helps draft/refine prompts
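
The distinction drawn in option D can be made concrete. A minimal sketch, with a hypothetical ask() helper standing in for any LLM call:

```python
# Illustrative prompt construction for the three techniques.
# ask() is a hypothetical stand-in for a real LLM client call.

def ask(prompt: str) -> str:
    raise NotImplementedError("replace with a real LLM call")

# Few-shot: worked examples are embedded in the prompt itself
few_shot = (
    "Classify the defect severity.\n"
    "Example: 'App crashes on login' -> Critical\n"
    "Example: 'Tooltip typo on help page' -> Low\n"
    "Now classify: 'Payment times out after 30s' ->"
)

# Chaining: each prompt consumes the previous prompt's output
def chained_test_design(requirement: str) -> str:
    conditions = ask(f"List test conditions for: {requirement}")
    cases = ask(f"Write test cases covering these conditions:\n{conditions}")
    return ask(f"Convert these test cases to Gherkin scenarios:\n{cases}")

# Meta-prompting: the model helps draft and refine the prompt itself
meta = (
    "Draft an effective prompt I could use to generate boundary-value "
    "tests for a date-of-birth field, then refine it for clarity."
)
```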

Question 12

The model flags anomalies in logs and also proposes equivalence partitions for input validation tests. Which metrics BEST evaluate these two outcomes together?

Options:

A.

Precision for anomaly identification and recall for coverage of valid/invalid partitions

B.

Time efficiency for anomaly detection and accuracy for coverage of valid/invalid partitions

C.

Diversity for anomaly identification and precision for partitions

D.

Accuracy for anomaly detection and precision for coverage of valid/invalid partitions
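
As a refresher on the metrics these options combine: precision measures how many flagged items were genuinely relevant, while recall measures how many relevant items were found. A self-contained sketch with made-up counts:

```python
# Precision/recall from true positives (tp), false positives (fp),
# and false negatives (fn). The counts below are invented for illustration.

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

# Anomaly identification: 8 genuine anomalies flagged, 2 false alarms
print(f"anomaly precision: {precision(tp=8, fp=2):.2f}")  # 0.80

# Partition coverage: 9 required partitions proposed, 3 missed
print(f"partition recall:  {recall(tp=9, fn=3):.2f}")     # 0.75
```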
