Spotlight on Stream Processing and Machine Learning



David Brimley, Financial Services Industry Consultant, Hazelcast, speaks to FinextraTV about what financial services firms are doing with machine learning and what firms should consider as they progress through their machine learning journey. He explains how streaming data fits in financial services, how firms can ease into streaming without going through a complete re-architecture of their systems, and how financial services technologists need to keep an eye on developments in in-memory computing, cloud, and containerization.

Deep Learning For Real Time Streaming Data With Kafka And Tensorflow | YongTang – ODSC East 2019


In mission-critical real-time applications, using machine learning to analyze streaming data is gaining momentum. In these applications, Apache Kafka is the most widely used framework for processing the data streams. It typically works alongside other machine learning frameworks for model inference and training purposes.

In this talk, our focus is the KafkaDataset module in TensorFlow. KafkaDataset feeds Kafka streaming data directly into TensorFlow's graph. As a part of TensorFlow (in `tf.contrib`), the implementation of KafkaDataset is written mostly in C++. The module exposes a machine-learning-friendly Python interface through TensorFlow's `tf.data` API, so it can be fed directly into `tf.keras` and other TensorFlow modules for training and inference purposes.

Combined with Kafka streaming itself, the KafkaDataset module in TensorFlow removes the need for an intermediate data-processing infrastructure. This helps many mission-critical real-time applications adopt machine learning more easily. At the end of the video we walk through a concrete example with a demo to showcase the usage we described.
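The pattern the talk describes — records flowing from a stream straight into model updates, with no intermediate storage layer — can be sketched in plain Python. This is an illustrative stand-in only: the generator, model, and learning rule below are hypothetical, not the actual `KafkaDataset` / `tf.data` API.

```python
# Framework-free sketch of "stream feeds model directly":
# a Kafka-consumer-like generator yields records, and each record
# immediately updates a tiny model (y = w * x fit by SGD).
# No intermediate batch store is involved.

def kafka_like_stream():
    """Stand-in for a Kafka consumer: yields (features, label) records."""
    data = [((1.0,), 2.0), ((2.0,), 4.0), ((3.0,), 6.0), ((4.0,), 8.0)]
    for record in data:
        yield record

def train_on_stream(stream, lr=0.05, epochs=200):
    """Fit y = w * x by stochastic gradient descent, one record at a time."""
    w = 0.0
    for _ in range(epochs):
        for (x,), y in stream():
            pred = w * x
            w -= lr * (pred - y) * x  # gradient of 0.5 * (pred - y)^2
    return w

w = train_on_stream(kafka_like_stream)
print(round(w, 2))  # converges toward 2.0 for this y = 2x data
```

In the real setup described above, the generator's role is played by KafkaDataset and the update loop by `tf.keras` training; the point of the sketch is only that nothing sits between the stream and the learner.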

Do You Like This Video? Share Your Thoughts in Comments Below
Also, you can visit our website and choose the nearest ODSC event to attend and experience all our trainings and workshops
odsc.com

#ODSC #DeepLearning #Tensorflow #Kafka #DataScience #AI

Data Engineering Spotlight: Q&A


During this Q&A, Google Cloud experts answer questions about the Data Engineering Spotlight and all the sessions!

Speakers: Priyanka Vergadia, Grace Yueng

Watch more:
Watch all Data Engineering sessions → https://goo.gle/3yMj6Bk

Subscribe to Google Cloud Events → https://goo.gle/GoogleCloudEvents

#GoogleCloudSpotlight

Real-Time Machine Learning and Smarter AI with Data Streaming


https://cnfl.io/podcast-episode-251 | Are bad customer experiences really just data integration problems? Can real-time data streaming and machine learning be democratized in order to deliver a better customer experience? Airy, an open-source data-streaming platform, uses Apache Kafka® to help business teams deliver better results to their customers. In this episode, Airy CEO and co-founder Steffen Hoellinger explains how his company is expanding the reach of stream-processing tools and ideas beyond the world of programmers.

Airy originally built conversational AI (chatbot) software and other customer support products for companies to engage with their customers in conversational interfaces. Asynchronous messaging created a large amount of traffic, so the company adopted Kafka to ingest and process all messages and events in real time.
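The ingest-and-process pattern mentioned here can be sketched as a toy in-memory event log with per-consumer offsets, which is the core idea behind Kafka-style messaging. Everything below is a hypothetical illustration, not Airy's or Kafka's actual implementation.

```python
# Toy sketch of Kafka-style ingestion: an append-only log per topic,
# with each consumer group tracking its own read offset so messages
# and events are processed exactly once, in arrival order.

from collections import defaultdict

class EventLog:
    def __init__(self):
        self.topics = defaultdict(list)   # topic name -> append-only event list
        self.offsets = defaultdict(int)   # (group, topic) -> next unread offset

    def produce(self, topic, event):
        """Append an event to a topic's log."""
        self.topics[topic].append(event)

    def consume(self, group, topic):
        """Return all events this group has not yet seen, then advance its offset."""
        start = self.offsets[(group, topic)]
        events = self.topics[topic][start:]
        self.offsets[(group, topic)] = len(self.topics[topic])
        return events

log = EventLog()
log.produce("messages", {"user": "a", "text": "hi"})
log.produce("messages", {"user": "b", "text": "help"})

batch = log.consume("support-bot", "messages")
print(len(batch))                              # 2: both events delivered in order
print(log.consume("support-bot", "messages"))  # []: offset has advanced
```

Real Kafka adds partitions, replication, and durable storage on top of this same append-only-log-plus-offsets model, which is what lets multiple downstream consumers (bots, human agents, ML models) read the same stream independently.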

In 2020, the co-founders decided to open source the technology, positioning Airy as an open source app framework for conversational teams at large enterprises to ingest and process conversational and customer data in real time. The decision was rooted in their belief that all bad customer experiences are really data integration problems, especially at large enterprises where data often is siloed and not accessible to machine learning models and human agents in real time.

(Who hasn’t had the experience of entering customer data into an automated system, only to have the same data requested eventually by a human agent?)

Airy is making data streaming universally accessible by supplying its clients with real-time data and offering integrations with standard business software. For engineering teams, Airy can reduce development time and increase the robustness of solutions they build.

Data is now the cornerstone of most successful businesses, and real-time use cases are becoming more and more important. Open-source app frameworks like Airy are poised to drive massive adoption of event streaming over the years to come, across companies of all sizes, and maybe, eventually, down to consumers.

EPISODE LINKS
► Learn how to deploy Airy Open Source – or sign up for an Airy Cloud test instance: https://airy.co/developers
► Google Case Study about Airy & TEDi, a 2,000 store retailer: https://businessmessages.google/success-stories/tedi/
► Become an Expert in Conversational Engineering: https://www.conversationdesigninstitute.com/communications/conversational-engineer
► Supercharging conversational AI with human agent feedback loops: https://www.youtube.com/watch?v=MCXumfwcmXE&feature=youtu.be
► Integrating all Communication and Customer Data with Airy and Confluent: https://blog.airy.co/streaming-data-airy-confluent/
► How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka: https://cnfl.io/build-and-deploy-scalable-ml-in-production-with-kafka-episode-251
► Real-Time Threat Detection Using Machine Learning and Apache Kafka: https://cnfl.io/real-time-threat-detection-using-ml-and-apache-kafka-episode-251
► Kris Jenkins’ Twitter: https://twitter.com/krisajenkins
► Join the Confluent Community: https://cnfl.io/confluent-community-episode-251
► Learn more with Kafka tutorials, resources, and guides at Confluent Developer: https://cnfl.io/confluent-developer-episode-251
► Live demo: Intro to Event-Driven Microservices with Confluent: https://cnfl.io/demo-intro-to-event-driven-microservices-with-confluent-episode-251
► Use PODCAST100 to get an additional $100 of free Confluent Cloud usage: https://cnfl.io/try-cloud-episode-251
► Promo code details: https://cnfl.io/podcast100-details-episode-251

TIMESTAMPS
0:00 – Intro
4:48 – What is Airy?
11:49 – What is Airy’s architecture?
16:19 – How does Airy work?
23:15 – Incorporating data mesh best practices
26:21 – What differentiates Airy from other stream-processing tools?
31:18 – Customer use-cases
33:18 – What stage is Airy in as a company?
36:04 – Getting started with Airy
37:08 – It’s a wrap!

CONNECT
Subscribe: https://youtube.com/c/confluent?sub_confirmation=1
Site: https://confluent.io
GitHub: https://github.com/confluentinc
Facebook: https://facebook.com/confluentinc
Twitter: https://twitter.com/confluentinc
LinkedIn: https://www.linkedin.com/company/confluent
Instagram: https://www.instagram.com/confluent_inc

ABOUT CONFLUENT
Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion – designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations. To learn more, please visit www.confluent.io.

#streamprocessing #kafka #apachekafka #microservices #confluent

FinextraTV & Hazelcast: Spotlight on Stream Processing and Machine Learning 2


David Brimley, Financial Services Industry Consultant, Hazelcast, speaks to FinextraTV about what financial services firms are doing with machine learning and what firms should consider as they progress through their machine learning journey. He explains how streaming data fits in financial services, how firms can ease into streaming without going through a complete re-architecture of their systems, and how financial services technologists need to keep an eye on developments in in-memory computing, cloud, and containerization.

For all your fintech-related news, please visit https://www.finextra.com.