The data plane
In the previous exploration we looked at issues relating to misconfiguration.
In this exploration, we investigate issues with the data plane: everything is configured correctly, but some traffic flow isn't functioning, and we need to find out why.
Ingress is configured for the `bookinfo` application, routing requests to the `productpage` destination.
Assuming the local cluster was deployed with k3d as described in the setup, the ingress gateway is reachable on `localhost`, port 80:
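As a quick check, a request can be sent to the `productpage` route through the gateway (the path and port follow the bookinfo sample's defaults):

```shell
# Send a request through the ingress gateway and print the response code
curl -sS -o /dev/null -w "%{http_code}\n" http://localhost/productpage
```

With all workloads healthy, this should print a 200 response code.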
No Healthy Upstream (UH)
What if for some reason the backing workload is not accessible?
Simulate this situation by scaling the `productpage-v1` deployment to zero replicas:
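One way to do this, assuming the deployment resides in the `default` namespace:

```shell
# Scale the productpage-v1 deployment down to zero replicas
kubectl scale deployment productpage-v1 --replicas=0
```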
In one terminal, tail the logs of the ingress gateway:
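Assuming the gateway was installed as the `istio-ingressgateway` deployment in the `istio-system` namespace, its logs can be followed with:

```shell
# Follow the access logs of the ingress gateway
kubectl logs -n istio-system deploy/istio-ingressgateway -f
```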
In another terminal, send a request in through the ingress gateway:
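The same request as before now has no healthy endpoints behind it:

```shell
curl -v http://localhost/productpage
```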
In the logs you should see the following line:
"GET /productpage HTTP/1.1" 503 UH no_healthy_upstream - "-" 0 19 0 - "10.42.0.1" "curl/8.7.1" "c4c58af1-2066-4c45-affb-d1345d32fc66" "localhost" "-" outbound|9080||productpage.default.svc.cluster.local - 10.42.0.7:8080 10.42.0.1:60667 - -
Note the UH response flag: No Healthy Upstream.
These response flags clearly communicate to the operator why a request did not succeed.
No Route Found (NR)
As another example, make a request to a route that does not match any routing rules in the virtual service:
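For example, a request to `/productpages` (note the extra "s"), a path the bookinfo VirtualService does not match:

```shell
curl -v http://localhost/productpages
```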
The log entry shows a 404 with the response flag NR, for "No Route Found":
"GET /productpages HTTP/1.1" 404 NR route_not_found - "-" 0 0 0 - "10.42.0.1" "curl/8.7.1" "2606aaa9-8c5c-4987-9ba7-86b89f901d34" "localhost" "-" - - 10.42.0.7:8080 10.42.0.1:13819 - -
UpstreamRetryLimitExceeded (URX)
Delete the `bookinfo` Gateway and VirtualService resources:
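Assuming the resources carry the names used by the bookinfo sample (`bookinfo-gateway` and `bookinfo`):

```shell
kubectl delete gateway bookinfo-gateway
kubectl delete virtualservice bookinfo
```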
In its place, configure ingress for the `httpbin` workload:
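A sketch of such a configuration, with hypothetical resource names, routing to the `httpbin` service on port 8000 and retrying up to three times on 5xx responses:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: httpbin-gateway   # hypothetical name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin   # hypothetical name
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
    retries:
      attempts: 3
      retryOn: 5xx
```

Apply it with `kubectl apply -f`.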
The VirtualService is configured with three retry attempts in the event of a 503 response.
Call the `httpbin` endpoint that returns a 503:
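The `/status/503` endpoint instructs `httpbin` to respond with that status code:

```shell
curl -sS -o /dev/null -w "%{http_code}\n" http://localhost/status/503
```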
The Envoy gateway logs will show the response flag URX: UpstreamRetryLimitExceeded:
"GET /status/503 HTTP/1.1" 503 URX via_upstream - "-" 0 0 120 119 "10.42.0.1" "curl/8.7.1" "dcb3b100-e296-4031-8f45-1234d20b0f20" "localhost" "10.42.0.9:8080" outbound|8000||httpbin.default.svc.cluster.local 10.42.0.7:38902 10.42.0.7:8080 10.42.0.1:51761 - -
That is, the gateway got a 503, retried the request up to three times, and then gave up.
Envoy's response flags provide insight into why a request to a target destination workload might have failed.
Sidecar logs
We are not restricted to inspecting the logs of the ingress gateway. We can also check the logs of the Envoy sidecars.
Tail the logs for the sidecar of the `httpbin` destination workload:
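Assuming `httpbin` runs as a deployment of the same name in the `default` namespace, its sidecar's logs can be followed by targeting the `istio-proxy` container:

```shell
# Follow the logs of the Envoy sidecar alongside the httpbin workload
kubectl logs -f deploy/httpbin -c istio-proxy
```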
Repeat the call to the `httpbin` "503" endpoint:
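As before:

```shell
curl -sS http://localhost/status/503
```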
You will see evidence of four inbound requests received by the sidecar: the original request plus three retry attempts.
Log levels
The log level for any Envoy proxy can be displayed or configured with the `istioctl proxy-config log` command.
Envoy has many loggers. The log level for each logger can be configured independently.
For example, let us target the Istio ingress gateway deployment.
To view the log levels for each logger, run:
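Assuming the gateway runs as the `istio-ingressgateway` deployment in the `istio-system` namespace:

```shell
# Display the current log level of every Envoy logger in the gateway
istioctl proxy-config log deploy/istio-ingressgateway -n istio-system
```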
The log levels are: `trace`, `debug`, `info`, `warning`, `error`, `critical`, and `off`.
To set the log level for, say, the wasm logger to `info`:
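Using the same gateway deployment as above:

```shell
# Raise only the wasm logger to info; other loggers are unaffected
istioctl proxy-config log deploy/istio-ingressgateway -n istio-system --level wasm:info
```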
This can be useful for debugging wasm extensions.
The output displays the updated logging levels for every logger for that Envoy instance.
Log levels can be reset for all loggers with the `--reset` flag:
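Again targeting the same gateway deployment:

```shell
# Reset every logger back to its default level
istioctl proxy-config log deploy/istio-ingressgateway -n istio-system --reset
```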