Welcome
Designed for existing Kafka developers, this tutorial shows you how to quickly connect your existing Kafka applications to the ONE StreamNative Platform, powered by the Ursa Engine. The tutorial is fully hands-on and self-paced, taking approximately three hours to complete.
- Convert an existing client.properties file to use Pulsar JWT tokens or StreamNative API keys to connect to the StreamNative Ursa Engine.
- Convert an existing client.properties file to use OAuth2.
- Connect to the Kafka protocol handler for producing and consuming messages and register schemas using the Kafka schema registry.
- In addition to the client.properties approach, code samples show how to configure cluster connections directly in your Java code (JWT token and OAuth2, for both the protocol handler and the schema registry).
- Code samples tested during the tutorial cover producing and consuming messages, transactions, and Kafka Streams. Further information on topic partitioning, compaction, and message retention is also provided.
- In addition, you will have an opportunity to try Ursa’s support for multi-tenancy and enable geo-replication to a second StreamNative Hosted Pulsar cluster by executing just one command.
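For orientation, the kind of client.properties conversion described above looks roughly like the sketch below. The hostname, tenant/namespace username, token placeholder, and the OAuth2 callback handler class are assumptions based on typical StreamNative Kafka-protocol setups; the tutorial supplies the exact values for your cluster.

```properties
# --- JWT token variant (all values are placeholders) ---
bootstrap.servers=your-cluster.streamnative.cloud:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# The Kafka protocol handler typically accepts the token as the SASL password,
# prefixed with "token:"; the username shown here is an assumption.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="public/default" \
  password="token:YOUR_JWT_TOKEN";

# --- OAuth2 variant (commented out; handler class and URLs are assumptions) ---
# sasl.mechanism=OAUTHBEARER
# sasl.login.callback.handler.class=io.streamnative.pulsar.handlers.kop.security.oauth.OauthLoginCallbackHandler
# sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
#   oauth.issuer.url="https://auth.streamnative.cloud/" \
#   oauth.credentials.url="file:///path/to/your-key-file.json" \
#   oauth.audience="urn:sn:pulsar:your-org:your-instance";
```

The same file is reused by the console producer/consumer tools and by any Kafka client that loads its configuration from a properties file.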
Tutorial Options
There are two options for consuming the content in this course:
Option 1: Request access to a tenant in the StreamNative Academy training cluster by emailing training-help@streamnative.io. We typically provide access within one business day. You will be provided with an OAuth2 key and a JWT token or API key, with all cluster permissions preconfigured for you. Access to the cluster is provided for one week. We ask that you do not use this option to test your own Kafka code.
Option 2: Spin up your own StreamNative Hosted Pulsar cluster by visiting streamnative.io. A $200 credit is available for testing. You can then test your own Kafka code against the cluster, following the tutorial directions on how to edit your existing Kafka code to connect using a JWT Token or OAuth2 Key. We have included directions in the tutorial for creating the cluster, setting up a service account, tenants and namespaces, and required permissions in your cluster. Sample code is available for download in the tutorial.
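When editing your existing Kafka code to connect with a JWT token, the change amounts to adding a few security settings to the client configuration. The sketch below shows one way to do this; the endpoint, username, and token are placeholders, not values from the tutorial.

```java
import java.util.Properties;

public class JwtClientConfig {

    // Builds Kafka client properties for a SASL_SSL connection authenticated
    // with a Pulsar JWT token. All argument values are illustrative placeholders.
    public static Properties jwtClientProps(String bootstrapServers, String jwtToken) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // The token is passed as the SASL password with a "token:" prefix;
        // the username ("public/default" here) is an assumption for illustration.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"public/default\" "
                        + "password=\"token:" + jwtToken + "\";");
        return props;
    }

    public static void main(String[] args) {
        Properties props = jwtClientProps(
                "your-cluster.streamnative.cloud:9093", "YOUR_JWT_TOKEN");
        // Add serializer settings, then pass props to
        // new KafkaProducer<>(props) or new KafkaConsumer<>(props).
        System.out.println(props.getProperty("sasl.mechanism"));
    }
}
```

Your producer and consumer code itself is unchanged; only the connection properties differ from a plain local Kafka setup.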
Certificate of Completion
Upon completing the course, you will receive a certificate of completion and a badge to post on LinkedIn. You must meet the following requirements to obtain them:
- You used Option 1 above to complete the course. We use our hosted cluster to verify that you have completed all required exercises.
- You completed at least four of the following five exercises (80% completion):
  - Produced and consumed messages to the transactions topic (JWT token exercise)
  - Produced and consumed messages to the schema topic (OAuth2 exercise, using either a JWT token or OAuth2 for the schema registry)
  - Converted at least one of the two applications (transactions or schema) to use multi-tenancy
  - Produced and consumed messages to a shared topic with both Kafka and Pulsar producers and consumers
  - Enabled geo-replication to a second StreamNative Hosted Pulsar cluster
Requirements
Beginner-level experience with Java (or a similar programming language) is recommended. Although the tutorial is designed for existing Kafka developers, no prior experience with Kafka or Pulsar is required.
The sample code uses Java 17 and Maven. The following guides may be useful in setting up your development environment.
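As a starting point, a Maven project for the exercises needs little beyond the Kafka client library. The fragment below is a sketch; the version number is an assumption, so check Maven Central (or the tutorial's downloadable samples) for the version actually used.

```xml
<!-- Illustrative pom.xml fragment; the version shown is an assumption -->
<properties>
  <maven.compiler.release>17</maven.compiler.release>
</properties>
<dependencies>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.7.0</version>
  </dependency>
</dependencies>
```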
Contact training at training-help@streamnative.io with any questions.
