Note
This project is under active development. Many features are not complete. We would love for you to Get Involved!

// TBD.
// TBD.
You can start a mock container registry server with the following command:
atest mock --prefix / mock/image-registry.yaml
Then, you can pull images from it:
docker pull localhost:6060/repo/name:tag
// TBD
This document introduces how to write a testsuite for the gRPC API of api-testing.
Before reading this document, you need to install and configure api-testing. For the specific steps, please refer to the Install Document. If you have completed these steps, you can continue reading the rest of this document.
To create a gRPC testsuite based on service reflection, just add the following content to the spec path of the YAML file:
spec:
  rpc:
    serverReflection: true
The rpc field has the following subfields:
Name | Type | Optional |
---|---|---|
import | []string | √ |
protofile | string | √ |
protoset | string | √ |
serverReflection | bool | √ |
import and protofile
protofile is a file path pointing to the location of the .proto file in which api-testing looks for descriptors.
The import field is similar to the --import_path parameter of the protoc compiler: it is used to determine the location of the proto files and the directories for resolving dependencies. As with protoc, you do not need to specify the location of certain proto files here (such as the Protocol Buffers Well-Known Types starting with google.protobuf); they are already built into the api-testing binary.
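For example, a minimal sketch of a spec that loads descriptors from a local proto file (the file and directory names here are hypothetical) could look like this:

spec:
  rpc:
    import:
      - ./proto
    protofile: service.proto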
protoset
The protoset field can be either a file path or a network address starting with http(s)://.
When you have a large number of proto files or complex dependencies, you can use protoc --descriptor_set_out=set.pb to generate a proto descriptor set. Essentially, it is a wire-encoded binary file that includes all the required descriptors.
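As a sketch, the generated descriptor set can then be referenced by a local path or fetched over HTTP (the file name and URL below are only examples):

spec:
  rpc:
    protoset: ./set.pb
    # or, served over the network:
    # protoset: http://example.com/descriptors/set.pb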
serverReflection
If the target server supports service reflection, setting this to true means you no longer need to provide the above three fields.
Note: the priority order of api-testing for the three descriptor sources is serverReflection > protoset > protofile.
As with an HTTP testsuite, you need to define the address of the server in the api field of the root node.
api: 127.0.0.1:7070
By default, api-testing uses an insecure connection to the target server. If you want to configure a TLS certificate, please refer to the About Security document.
Writing a testsuite for a gRPC API is basically the same as writing one for an HTTP API.
- name: FunctionsQuery
  request:
    api: /server.Runner/FunctionsQuery
    body: |
      {
        "name": "hello"
      }
  expect:
    body: |
      {
        "data": [
          {
            "key": "hello",
            "value": "func() string"
          }
        ]
      }
The format of the api field is /package.service/method, which supports gRPC unary calls, client streaming, server streaming, and bidirectional streaming calls.
The body field at the same level as the api field is a Protocol Buffers message expressed in JSON format, representing the input parameters of the api to be called. In particular, when you need to call a client streaming or bidirectional streaming API, please write the body field as a JSON Array, for example:
body: |
  [
    {
      "name": "hello"
    },
    {
      "name": "title"
    }
  ]
Writing return-content validation for a gRPC API is basically the same as for an HTTP API. For the gRPC API, all return values are treated as map types and put into the api-testing specific return structure:
expect:
  body: |
    {
      "data": [
        {
          "key": "hello",
          "value": "func() string"
        }
      ]
    }
api-testing provides a dedicated library for JSON comparison; please refer to compare.
Please note that for server streaming and bidirectional streaming modes, where the server sends multiple messages, the target array in the data field must be the same length as the array to be validated, and both arrays must have the same contents at the same index.
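As a sketch, if the server streams two messages (the field values here are only placeholders), the expected data array needs exactly two entries in the same order:

expect:
  body: |
    {
      "data": [
        {
          "name": "hello"
        },
        {
          "name": "title"
        }
      ]
    }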
The verify functionality of the gRPC API is consistent with that of the HTTP API and will not be repeated here.
You can report test results to Prometheus with the following command:
atest run --report prometheus --report-file http://localhost:9091 \
-p sample/testsuite-gitee.yaml --duration 30m --qps 1
This pushes the test result data to the Prometheus PushGateway, from which Prometheus can then scrape the metrics.
Skip the following instructions if you are familiar with Prometheus:
docker run \
-p 9090:9090 \
-v /etc/timezone:/etc/timezone:ro \
-v /etc/localtime:/etc/localtime:ro \
-v /root/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
docker run -p 9091:9091 \
-v /etc/timezone:/etc/timezone:ro \
-v /etc/localtime:/etc/localtime:ro \
prom/pushgateway
docker run -p 3000:3000 docker.io/grafana/grafana
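As a minimal sketch of the mounted prometheus.yml, a scrape job for the PushGateway (assuming it is reachable at localhost:9091 from the Prometheus process) could look like this:

scrape_configs:
  - job_name: pushgateway
    # keep the labels pushed by atest instead of overwriting them
    honor_labels: true
    static_configs:
      - targets:
          - localhost:9091  # adjust if Prometheus runs in a container that cannot reach the host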
Usually, when TLS certificate authentication is not used, the gRPC client and server communicate in plain text, and the information can easily be eavesdropped on or tampered with by a third party. Therefore, it is recommended to use SSL/TLS to protect gRPC services in most cases. Currently, atest has implemented server-side TLS; mutual TLS (mTLS) is not implemented yet.
By default, atest does not use any security policy, which is equivalent to spec.secure.insecure = true. Enabling TLS only requires adding the following content to your YAML:
spec:
  secure:
    cert: server.pem
    serverName: atest
secure has the following five fields:
Name | Type | Optional |
---|---|---|
cert | string | x |
ca | string | √ |
key | string | √ |
serverName | string | x |
insecure | bool | √ |
cert is the path to the certificate that the client needs to configure, in PEM format.
serverName is the server name required by TLS, usually the x509 SAN (Subject Alternative Name) used when issuing the certificate.
ca is the path to the CA certificate, and key is the path to the private key corresponding to cert. Filling in these two items enables mTLS. (mTLS is not implemented yet.)
When insecure is false, cert and serverName are required.
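Putting it together, a spec that enables both TLS and server reflection (reusing the file names from the examples above) might look like this:

spec:
  secure:
    cert: server.pem
    serverName: atest
  rpc:
    serverReflection: true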
// TBD
Install API Testing.
// TBD.
Date: 2024-06-01
// TBD