Releases: linkerd/linkerd2
v18.7.3
Linkerd2 v18.7.3 completes the rebranding from Conduit to Linkerd2, and improves
overall performance and stability.
- Proxy
  - Improved CPU utilization by ~20%
- Web UI
  - Experimental `/tap` page now supports additional filters
- Control Plane
  - Updated all k8s.io dependencies to 1.11.1
v18.7.2
Linkerd2 v18.7.2 introduces new stability features as we work toward production readiness.
You can easily install this release (and others!). Simply:
```bash
curl https://run.conduit.io/install\?v18.7.2 | sh
linkerd install | kubectl apply -f -
linkerd dashboard
```
Release notes:
- Control Plane
  - Breaking change: injected pod labels have been renamed to be more consistent with Kubernetes; previously injected pods must be re-injected with the new version of the linkerd CLI in order to work with the updated control plane
  - The "ca-bundle-distributor" deployment has been renamed to "ca"
- Proxy
  - Fixed: HTTP/1.1 connections were not properly reused, leading to elevated latencies and CPU load
  - Fixed: the `process_cpu_seconds_total` metric was calculated incorrectly
- Web UI
  - New per-namespace application topology graph
  - Experimental web-based Tap interface accessible at `/tap`
  - Updated favicon to the Linkerd logo
v18.7.1
Linkerd2 v18.7.1 is the first release of Linkerd2, which was formerly hosted at https://github.com/runconduit/conduit.
This is a beta release. It is the first of many as we work towards a GA release. See the blog post for more details on where this is all going.
The artifacts here are the CLI binaries. To install Linkerd2 on your Kubernetes cluster, download the appropriate binary, rename it to `linkerd`, and run `linkerd install | kubectl apply -f -`.
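For reference, a Linux install might look roughly like the following; the download URL and asset name are illustrative placeholders, so use the binary attached to this release for your platform:

```bash
# Download the CLI binary for your platform and put it on your PATH
# (the asset name below is a placeholder; use the one listed under this release's assets)
curl -sL -o linkerd https://github.com/linkerd/linkerd2/releases/download/v18.7.1/linkerd2-cli-v18.7.1-linux
chmod +x linkerd
sudo mv linkerd /usr/local/bin/

# Install the control plane into the cluster
linkerd install | kubectl apply -f -
```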
- Packaging
  - Introduce new date-based versioning scheme, `vYY.M.n`
  - Move all Docker images to the `gcr.io/linkerd-io` repo
- User Interface
  - Update branding to reference Linkerd throughout
  - The CLI is now called `linkerd`
- Production Readiness
  - Fix issue with Destination service sending back incomplete pod metadata
  - Fix high CPU usage during proxy shutdown
  - ClusterRoles are now unique per Linkerd install, allowing multiple instances to be installed in the same Kubernetes cluster
v0.5.0
Conduit v0.5.0 introduces a new, experimental feature that automatically
enables Transport Layer Security between Conduit proxies to secure
application traffic. It also adds support for HTTP protocol upgrades, so
applications that use WebSockets can now benefit from Conduit.
- Security
  - New: `conduit install --tls=optional` enables automatic, opportunistic TLS. See the docs for more info, and the sketch after this list for usage.
- Production Readiness
  - The proxy now transparently supports HTTP protocol upgrades (for instance, WebSockets).
  - The proxy now seamlessly forwards HTTP `CONNECT` streams.
  - Controller services are now configured with liveness and readiness probes.
- User Interface
  - `conduit stat` now supports a virtual `authority` resource that aggregates traffic by the `:authority` (or `Host`) header of an HTTP request.
  - `dashboard`, `stat`, and `tap` have been updated to describe TLS state for traffic.
  - `conduit tap` now has more detailed information, including the direction of each message (outbound or inbound).
  - `conduit stat` now more accurately records histograms for low-latency services.
  - `conduit dashboard` now includes error messages when a Conduit-enabled pod fails.
- Internals
  - Prometheus has been upgraded to v2.3.1.
  - A potential live-lock has been fixed in HTTP/2 servers.
  - `conduit tap` could crash due to a null-pointer access. This has been fixed.
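A minimal sketch of exercising the TLS and `authority` features above from the CLI, following the install pattern used elsewhere in these notes:

```bash
# Install (or re-install) the control plane with opportunistic TLS enabled
conduit install --tls=optional | kubectl apply -f -

# Aggregate traffic stats by the :authority / Host header of requests
conduit stat authority
```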
v0.4.4
Conduit v0.4.4 continues to improve production suitability and sets up internals for the
upcoming v0.5.0 release.
- Production Readiness
  - The destination service has been mostly rewritten to improve safety and correctness, especially during controller initialization.
  - Readiness and liveness checks have been added for some controller components.
  - RBAC settings have been expanded so that Prometheus can access node-level metrics.
- User Interface
  - Ad blockers like uBlock prevented the Conduit dashboard from fetching API data. This has been fixed.
  - The UI now highlights pods that have failed to start a proxy.
- Internals
  - Various dependency upgrades, including Rust 1.26.2.
  - TLS testing continues to bear fruit, precipitating stability improvements to dependencies like Rustls.
Special thanks to @alenkacz for improving docker build times!
v0.4.3
Conduit v0.4.3 continues progress towards production readiness. It features a new
latency-aware load balancer.
- Production Readiness
  - The proxy now uses a latency-aware load balancer for outbound requests. This implementation is based on Finagle's Peak-EWMA balancer, which has been proven to significantly reduce tail latencies. This is the same load balancing strategy used by Linkerd.
- User Interface
  - `conduit stat` output is now slightly more predictable, especially for commands like `watch conduit stat all --all-namespaces`.
  - Failed and completed pods are no longer shown in stat summary results.
- Internals
  - The proxy now supports some TLS configuration, though these features remain disabled and undocumented pending further testing and instrumentation.
Special thanks to @ihcsim for contributing his first PR to the project and to @roanta for
discussing the Peak-EWMA load balancing algorithm with us.
v0.4.2
Conduit 0.4.2 is a major step towards production readiness. It features a wide array of
fixes and improvements for long-running proxies, and several new telemetry features. It
also lays the groundwork for upcoming releases that introduce mutual TLS everywhere.
- Production Readiness
  - The proxy now drops metrics that do not update for 10 minutes, preventing unbounded memory growth for long-running processes.
  - The proxy now constrains the number of services that a node can route to simultaneously (default: 100). This protects long-running proxies from consuming unbounded resources by tearing down the longest-idle clients when the capacity is reached.
  - The proxy now properly honors HTTP/2 request cancellation.
  - The proxy could incorrectly handle requests in the face of some connection errors. This has been fixed.
  - The proxy now honors DNS TTLs.
  - `conduit inject` now works with `statefulset` resources (see the sketch after this list).
- Telemetry
  - New: `conduit stat` now supports the `all` Kubernetes resource, which shows traffic stats for all Kubernetes resources in a namespace.
  - New: the Conduit web UI has been reorganized to provide namespace overviews.
  - Fix a bug in Tap that prevented the proxy from simultaneously satisfying more than one Tap request.
  - Fix a bug that could prevent stats from being reported for some TCP streams in failure conditions.
  - The proxy now measures response latency as time-to-first-byte.
- Internals
  - The proxy now supports user-friendly time values (e.g. `10s`) from environment configuration.
  - The control plane now uses the client library for Kubernetes 1.10.2.
  - Much richer proxy debug logging, including socket and stream metadata.
  - The proxy internals have been changed substantially in preparation for TLS support.
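A quick sketch of the new `all` resource and statefulset injection mentioned above; `statefulset.yml` is a placeholder for your own manifest:

```bash
# Traffic stats for all Kubernetes resources in a namespace
conduit stat all

# Inject the proxy into a StatefulSet manifest and apply it
conduit inject statefulset.yml | kubectl apply -f -
```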
Special thanks to @carllhw, @kichristensen, & @sfroment for contributing to this release!
Upgrading from v0.4.1
When upgrading from v0.4.1, we suggest that the control plane be upgraded to v0.4.2 before
injecting application pods to use v0.4.2 proxies.
v0.4.1
Conduit 0.4.1 builds on the telemetry work from 0.4.0, providing rich,
Kubernetes-aware observability and debugging.
- Web UI
  - New: automatically-configured Grafana dashboards for Services, Pods, ReplicationControllers, and Conduit mesh health.
  - New: `conduit dashboard` Pod and ReplicationController views.
- Command-line interface
  - Breaking change: `conduit tap` now operates on most Kubernetes resources.
  - `conduit stat` and `conduit tap` now both support kubectl-style resource strings (`deploy`, `deploy/web`, and `deploy web`; see the sketch after this list), specifically:
    - `namespaces`
    - `deployments`
    - `replicationcontrollers`
    - `services`
    - `pods`
- Telemetry
  - New: Tap support for filtering by and exporting destination metadata. Now you can sample requests from A to B, where A and B are any resource or group of resources.
  - New: TCP-level stats, including connection counts and durations, and throughput, wired through to Grafana dashboards.
- Service Discovery
  - The proxy now uses the trust-dns DNS resolver. This fixes a number of DNS correctness issues.
  - The Destination service could sometimes return incorrect, stale labels for an endpoint. This has been fixed!
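For example, the resource strings above can be used interchangeably across `stat` and `tap`; a minimal sketch, where `web` is the deployment name from the examples above:

```bash
# Stats for all deployments, then for a single deployment named "web"
conduit stat deploy
conduit stat deploy/web

# Tap live requests for that deployment (space-separated form)
conduit tap deploy web
```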
v0.4.0
Conduit 0.4.0 overhauls Conduit's telemetry system and improves service discovery
reliability.
- Web UI
  - New: automatically-configured Grafana dashboards for all Deployments.
- Command-line interface
  - `conduit stat` has been completely rewritten to accept arguments like `kubectl get`. The `--to` and `--from` filters can be used to filter traffic by destination and source, respectively.
  - `conduit stat` currently can operate on `Namespace` and `Deployment` Kubernetes resources. More resource types will be added in the next release!
- Proxy (data plane)
  - New: Prometheus-formatted metrics are now exposed on `:4191/metrics`, including rich destination labeling for outbound HTTP requests (see the sketch after this list). The proxy no longer pushes metrics to the control plane.
  - The proxy now handles `SIGINT` or `SIGTERM`, gracefully draining requests until all are complete or `SIGQUIT` is received.
  - SMTP and MySQL (ports 25 and 3306) are now treated as opaque TCP by default. You should no longer have to specify `--skip-outbound-ports` to communicate with such services.
  - When the proxy reconnected to the controller, it could continue to send requests to old endpoints. Now, when the proxy reconnects to the controller, it properly removes invalid endpoints.
  - A bug impacting some HTTP/2 reset scenarios has been fixed.
- Service Discovery
  - Previously, the proxy failed to resolve some domain names that could be misinterpreted as a Kubernetes Service name. This has been fixed by extending the Destination API with a negative acknowledgement response.
- Control Plane
  - The Telemetry service and associated APIs have been removed.
- Documentation
  - Updated roadmap
  - Added Prometheus metrics guide
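A rough sketch of inspecting the new telemetry; the `conduit stat` invocation follows the `kubectl get`-style arguments described above, and `<meshed-pod>` is a placeholder for any pod with the proxy injected:

```bash
# Deployment-level traffic stats (kubectl get-style arguments)
conduit stat deployments

# Scrape a proxy's Prometheus endpoint directly via a port-forward
kubectl port-forward <meshed-pod> 4191:4191 &
curl -s http://localhost:4191/metrics | head
```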
Special thanks to @ahume, @alenkacz, & @xiaods for contributing to this release!
Upgrading from v0.3.1
When upgrading from v0.3.1, it's important to upgrade proxies before upgrading the
controller. As you upgrade proxies, the controller will lose visibility into some data
plane stats. Once all proxies are updated, `conduit install | kubectl apply -f -` can be
run to upgrade the controller without causing any data plane disruptions. Once the
controller has been restarted, traffic stats should become available.
v0.3.1
Conduit 0.3.1 improves Conduit's resilience and transparency.
- Proxy (data plane)
  - The proxy now makes fewer changes to requests and responses being proxied. In particular, requests and responses without bodies or with empty bodies are better supported.
  - HTTP/1 requests with different `Host` header fields are no longer sent on the same HTTP/1 connection, even when those hostnames resolve to the same IP address.
  - A connection leak during proxying of non-HTTP TCP connections was fixed.
  - The proxy now handles unavailable services more gracefully by timing out while waiting for an endpoint to become available for the service.
- Command-line interface
  - `$KUBECONFIG` with multiple paths is now supported (PR #482 by @hypnoglow; see the sketch after this list).
  - `conduit check` now checks for the availability of a Conduit update (PR #460 by @ahume).
- Service Discovery
  - Kubernetes services with type `ExternalName` are now supported.
- Control Plane
  - The proxy is injected into the control plane during installation to improve the control plane's resilience and to "dogfood" the proxy.
  - The control plane is now more resilient regarding networking failures.
- Documentation
  - The markdown source for the documentation published at https://conduit.io/docs/ is now open source at https://github.com/runconduit/conduit/tree/master/doc.
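A minimal sketch of the multi-path `$KUBECONFIG` support and the update check described above; the config file paths are placeholders:

```bash
# Point the CLI at multiple kubeconfig files, kubectl-style
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/staging-config"

# Verify the installation and check whether a newer Conduit release is available
conduit check
```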