We recently hosted a webinar on Einstein Platform Services in our AMA (Ask Me Anything) format. The session featured Zineb Laraki (Product Manager), Michael Machado (Director of Product Management), and Rene Winkelmeyer (Senior Developer Evangelist), and was hosted by Arabella David (Director, Developer Marketing). The team answered questions you submitted and discussed everything from best practices for collecting data to differences in models.

Read on for a transcript of the Q&A from the webinar. You can also watch and listen to the recording of the webinar below.

Below the transcript, check out a few additional questions that we didn’t have time to answer during the webinar, plus other resources to help you learn more.

What the heck is Einstein?

MICHAEL: Einstein was a big announcement at Dreamforce and we’ve seen a ton of momentum over the past year-plus since our launch. You can think of Einstein as the way Salesforce is embedding intelligence into every cloud and application that our Salesforce customers use. With Einstein.ai, Einstein Vision, and Einstein Language, we’re giving you the ability to understand unstructured data and embed that technology into your own applications through our APIs. Einstein Language and Einstein Vision are APIs you can sign up for on Einstein.ai and begin training your own models, or leveraging our pre-trained models, to start developing new applications or extensions of Salesforce that embed this deep learning technology and the ability to understand unstructured data (text or images).

Einstein Sentiment gives you the ability to read a document and categorize it as positive, negative, or neutral. Einstein Intent takes that one step further and lets you customize the output: you define the labels yourself.
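
For a concrete sense of what those calls look like, here is a minimal Python sketch of hitting Einstein Sentiment over the REST API. The endpoint, the multipart form fields, and the CommunitySentiment model ID reflect our reading of the Einstein Platform Services docs at the time; treat them as assumptions and check the current docs before relying on them.

```python
import requests

API_BASE = "https://api.einstein.ai/v2"
TOKEN = "<your Einstein Platform access token>"  # generated from your Einstein.ai account key

def get_sentiment(text):
    """Classify text as positive, negative, or neutral with the pre-trained CommunitySentiment model."""
    resp = requests.post(
        f"{API_BASE}/language/sentiment",
        headers={"Authorization": f"Bearer {TOKEN}"},
        # The API takes multipart form fields rather than a JSON body.
        files={
            "modelId": (None, "CommunitySentiment"),
            "document": (None, text),
        },
    )
    resp.raise_for_status()
    return resp.json()  # expected to contain a list of labels with probabilities

print(get_sentiment("The latest release completely fixed my issue. Great job!"))
```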

In Einstein Vision, we have Einstein Object Detection, which gives you a little more granularity than Image Classification. Image Classification lets you send an image in and tells you what product it is or what the scene and main subject of the image is. Einstein Object Detection lets you have multiple objects recognized, understand their size relative to other objects in the photo and where they are located within the photo, and then classify or categorize them by SKU or by the actual item you’re looking at. So two different technologies are available in the Einstein Vision and Einstein Language families, and we’ll have more technology delivered through these platform services.
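
To see that difference in code, here is a hedged Python sketch of the two Vision calls: a whole-image classification prediction and an object-detection call that returns labeled bounding boxes. The /vision/predict and /vision/detect endpoints and the sampleBase64Content field follow our recollection of the public docs; the model IDs are placeholders.

```python
import base64
import requests

API_BASE = "https://api.einstein.ai/v2"
TOKEN = "<your Einstein Platform access token>"

def _vision_call(path, model_id, image_path):
    """Send a base64-encoded image to a Vision endpoint as multipart form data."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    resp = requests.post(
        f"{API_BASE}/vision/{path}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        files={"modelId": (None, model_id), "sampleBase64Content": (None, encoded)},
    )
    resp.raise_for_status()
    return resp.json()

# Image classification: one set of labels describing the whole image.
print(_vision_call("predict", "<your image classification model ID>", "shelf.jpg"))

# Object detection: multiple recognized objects, each with a label and a bounding box
# (so you also get relative size and position within the photo).
print(_vision_call("detect", "<your object detection model ID>", "shelf.jpg"))
```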

What’s the better way to sign up? There are two options for signing up for Einstein: the Salesforce platform or Heroku.

ZINEB: You can go to Einstein.ai and authenticate through Salesforce or Heroku. There’s no difference in functionality or in what you get whether you sign up through Salesforce or Heroku. It really depends on your use case. If you’re going to be integrating Einstein with a Salesforce workflow, it makes more sense to sign up using your Salesforce account. If you’re going to be developing an app on Heroku and integrating these services with that application, then it makes more sense to use Heroku. In terms of purchasing, if you’re signing up through Salesforce, you’ll go through your Salesforce AE. On Heroku, you can purchase through the add-on marketplace or your AE, but in terms of functionality there is really no difference.
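
However you sign up, the account gives you an email identifier and a private key (the einstein_platform.pem file) that you exchange for a short-lived access token before calling any of the APIs. Here is a hedged Python sketch of that OAuth JWT exchange, assuming the token endpoint described in the docs and the PyJWT library; the email and key path are placeholders.

```python
import time
import jwt       # PyJWT, with the cryptography backend for RS256
import requests

ACCOUNT_EMAIL = "you@example.com"                 # the email tied to your Einstein.ai account
PRIVATE_KEY = open("einstein_platform.pem").read()
TOKEN_URL = "https://api.einstein.ai/v2/oauth2/token"

def get_access_token(ttl_seconds=3600):
    """Sign a short-lived JWT with your Einstein private key and trade it for a bearer token."""
    assertion = jwt.encode(
        {"sub": ACCOUNT_EMAIL, "aud": TOKEN_URL, "exp": int(time.time()) + ttl_seconds},
        PRIVATE_KEY,
        algorithm="RS256",
    )
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
            "assertion": assertion,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```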

What’s the best use case you’ve seen so far on Language?

MICHAEL: We’ve seen a tremendous amount of adoption for Einstein Language and we’re really getting excited about some of the use cases. I think artificial intelligence in general is best deployed in a way that feels natural to the end user and automates a workflow, so you’re not changing a lot but actually improving your operations or the user experience. With that, I think there are three that really pop out to me.

One: Someone typically has unstructured data that comes in through a survey – a customer tells us what their experience was like or tells us ways to improve our service. Einstein Sentiment hopefully helps you understand whether it’s a positive or negative review. Then with Einstein Intent, you make sure it’s routed to the right person who would want to understand that feedback. I think survey analysis fits very much into CRM-type use cases for marketing and is a great use case.

For service, a lot of customers have been using this to automate work order generation, so understanding the service case and being able to route it to the best person to handle that issue based on historical performance.

Finally, looking at unstructured data in Salesforce—notes, comments, activity, data where you have unstructured fields and you’re able to try to guide the user through the natural process. What are you trying to find out or what are you trying to discover in these notes? How can we make sure we extract that information out, label it accordingly and make sure the right person is notified? So if there was something you discovered during a customer meeting, maybe your manager wants to know or maybe there’s another department that would be able to handle what a customer was talking about through any unstructured comment field.

What are the best use cases we’ve seen so far for Vision?

ZINEB: We’ll talk about the top three use cases that we’ve seen customers gravitate towards. The first one is visual search, so really expanding the way people find or discover products online. You’ve probably heard of Einstein Vision for Social Studio as being one example of how that can be leveraged in that use case.

Another main one that we’re seeing customers look into is product identification – identifying products in images to streamline sales or service. Another example of a cloud that’s integrated Einstein Vision for product identification is Field Service, which uses Einstein Vision so reps out in the field can take pictures of parts, have them automatically identified, and use that for reordering or flagging defective parts.

Another one that’s been getting a lot of interest from customers is the CPG use case for object detection – consumer goods companies using object detection for on-shelf availability or share-of-shelf for planogram compliance. Once again, the use case is being able to leverage the image to understand what’s going on and streamline workflows or increase sales.

What are the neural net architectures used for platform services?

MICHAEL: First, I’ll give a high-level overview of what deep learning is. It’s an overhyped term – something you hear a lot without really understanding the nuances – but really it describes leveraging a deep learning model, which is a neural network architecture with multiple layers.

We use our own custom models and fine-tune the models for CRM use cases and it really depends on what we’re trying to tackle. We use convolutional neural networks, for instance, when we’re working with image recognition use cases. We typically use our custom TM models which are recurrent neural network models for language use cases.

One of our big premises of democratizing AI is removing a lot of that complexity, so you get to focus on the input data, knowing that we’re going to fine-tune the hidden layers of the deep learning models. That way you’re able to train, understand, and improve your models over time without getting bogged down by the complexity of neural network architectures.

Rene, what’s up with the green wig? 

RENE: So what we did was use Einstein Vision – specifically image classification – for the Developer Keynote, and for that we trained the system with specific models to detect if an aloe plant is healthy or unhealthy. We used a real aloe plant because me wearing a green wig mostly looks unhealthy or suspicious. So this was the background for having some fun in the Developer Keynote.

For people getting a little bit confused about what he’s talking about with the aloe plant and the wig, there’s a reference app that covers this in a little more detail. It’s the Pure Aloe reference app and it’s for people who want to play around with Salesforce. There are several reference apps available online; the Pure Aloe app has farming and retail dimensions to it. There are also other reference apps available, such as the Dreamhouse app, which is kind of a real-estate scenario and, if I recall correctly, also uses Einstein Vision. Can anyone remember any other reference apps that cover Einstein Vision or have it encapsulated within it?

RENE: I think later we’ll probably have time to show the Playground, which allows you to use all of the Einstein Platform Services from within an org using Apex calls.

Where would people go to learn more about the Playground?

RENE: There’s a repo library that I wrote for all the Einstein Platform Services – you can grab it from GitHub. It includes calls to all API methods. I would also like to call out Shane’s repo – that’s a colleague of mine who did similar things. We did similar things in two different repos and we are currently working on merging them into one open source repository for everyone to fetch. This will also contain a ton of Apex classes that communicate with the Einstein Platform Services API, as well as something like this Playground where people can directly, without any coding, use image classification and detection and also the Language services. I would also like to highlight that we have a ton of blog posts on our Salesforce Developers blog that introduce people to the services and to the Playground.

Are Einstein Language and Einstein Vision both part of Platform Services? Also, if we want to use lead scoring etc, then do we also need platform services? How does this all fit together?

MICHAEL: You aren’t bound by any one product to consume all of Einstein. You can really pick and choose what you need to do to accomplish your goals. Einstein lead scoring, for instance, is a feature you can turn on and train your models in a very automated way. It’s less of a developer tool and much more of an end user consumption tool.

You get the entire package of APIs delivered to you through a single platform. You sign up once and you have access to Einstein Vision and Einstein Language. You have a free account where you can train as many models and upload as much data as you want. You can test your models and retrain them. It really gives you a ton of flexibility until you start actually putting it into production doing thousands and thousands of calls on a monthly basis.

Now as you want to actually move into production and get into tens of thousands and millions of calls, we’re a very scalable platform and that’s where you actually have to start talking to your AE and purchase the actual service. But we really are big proponents of being able to test, evaluate, and prove out your use case and our ability to fulfill your requirements through the Einstein Platform Services. As you look into more developer-type use cases where you want to build a custom application, I really encourage you to sign up for Einstein.ai, kick the tires, train models and really use this as an opportunity to prove out your use case without making any financial commitments.

Everyone loves a tool you can play around with for free. When we do provide this, it’s not like we’re hampering people in any way, right? Are people playing with the full power of the AI tool?

MICHAEL: Yeah, it’s a great point. It’s not one of those cheater “freemium” services where you’re playing a free game from the App Store and you get to Level One and it says “No, please pay or you can’t get to the next level” unless you buy an in-app upgrade or something like that. It’s really the full suite of services delivered to you on day one. The only real limitation is the total number of predictions once you have a production model in place and you start making over 1,000-2,000 calls. We’re going to be expanding to 2,000 calls very shortly, on February 1st, as we have just launched Einstein Language and Einstein Vision, and those are 2,000 prediction calls that refresh every month, so you can test your model as much as you want. You can even put it into a closed group – maybe a few individuals that are going to be playing around with the POC. You might hit your 2,000 limit and then the next month it refreshes and you get to use 2,000 predictions again, until you decide you want to upgrade past the 2,000 prediction limit. But all these other services you’re seeing here – gathering your data, training and retraining your model, reviewing metrics, testing your model in production – all of those are available to you on day one with no limitations.

We launched Einstein Vision last year. We’ve been in open beta for Einstein Language since TrailheaDX. Now we’ve just gone GA with Object Detection and with the entire Language suite of APIs. We wanted to make sure that customers weren’t limited to one use case – that you were able to build both language models and vision models. Because it’s all delivered to a single account, you’re able to access all of the services and potentially work on a vision use case and a language use case – multiple use cases in the same APIs – in parallel, with double the number of predictions per month for free.

What’s the difference between custom models and pretrained models?

MICHAEL: You can think of Sentiment as a great example of a pre-trained model for Einstein Language. Additionally, we have three pre-trained models for Einstein Vision. One is a general image classifier that has over 1,000 unique classes or categories, so you can send us an image and we will detect the main object with a high degree of accuracy. We try to give you out-of-the-box capabilities: feed us text and we’ll tell you what kind of sentiment it is; feed us images and the general image classifier will tell you what’s in them. We also have a food classifier and a scene classifier, and those are our three pre-trained image classifiers.

We also want to give you the ability to define your own custom model. When we move from pre-trained to custom models, you get to define the output classes – the output labels or categories. In the sentiment example the outputs are positive, negative, or neutral, but if you look at intent, image classification, or object detection, you define all of the outputs. You gather the data to support those outputs and the most likely scenarios you expect once you push your model to production – those are the types of data you want to collect. Then you’re able to train your model to output your specific customer-defined labels. That model is org-specific, it’s your model, and we don’t share any of the data with other customers. It’s really your ability to have the upper hand and to push artificial intelligence into your organization in a very business-specific way.
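
In practice, moving from a pre-trained model to a custom one is mostly a matter of swapping the model ID in the same prediction call. Below is a hedged sketch; the pre-trained classifier IDs (GeneralImageClassifier, FoodImageClassifier, SceneClassifier) are the names we recall from the docs, and the custom model ID is whatever your own training call returned.

```python
import base64
import requests

API_BASE = "https://api.einstein.ai/v2"
TOKEN = "<your Einstein Platform access token>"

def classify_image(model_id, image_path):
    """Classify an image against any Vision model, pre-trained or custom."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    resp = requests.post(
        f"{API_BASE}/vision/predict",
        headers={"Authorization": f"Bearer {TOKEN}"},
        files={"modelId": (None, model_id), "sampleBase64Content": (None, encoded)},
    )
    resp.raise_for_status()
    return resp.json()

# Pre-trained classifiers ship with the platform; the IDs below are the ones
# listed in the docs at the time (treat them as examples, not guarantees).
print(classify_image("GeneralImageClassifier", "storefront.jpg"))
print(classify_image("FoodImageClassifier", "lunch.jpg"))
print(classify_image("SceneClassifier", "warehouse.jpg"))

# A custom model is the same call with the model ID returned by your own training run.
MY_MODEL_ID = "<modelId returned after training>"   # org-specific; your data stays yours
print(classify_image(MY_MODEL_ID, "product_photo.jpg"))
```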

What are some best practices for collecting data?

ZINEB: Before walking through the best practices for collecting data, maybe I’ll walk through the process you go through to actually build a custom model, which Michael was referring to. You first define your classes. For Einstein Intent in Language, for example, your classes could be shipping, billing, or product. For Vision, it could be “This is product A,” “This is product B,” “This is product C.” Once you have those classes defined, the second step is to gather your data and label it. Once you have that, you simply upload the data using our APIs and then make another call to train the model on that data set.

Once that’s done, you get a model ID and there are really two cool things you can do with it: The first is that you can start getting predictions, whether that’s on utterances or text the model hasn’t seen before, or on images if it’s an image model.

Then the other thing that’s really cool is that we provide you with model metrics. They allow you to understand how your model is performing. Usually there’s an iteration step here the first time around to improve your model, but once you’re happy with how it’s performing, you can simply integrate it with any workflow – a Salesforce workflow or an external application. So that’s the process you go through to build your custom model.
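
Stitched together, that workflow comes down to a handful of REST calls: upload a labeled data set, train, poll until training finishes, then read the metrics. Here is a hedged Python sketch for a Vision model; the endpoint paths and field names (path, type, datasetId) are based on our reading of the docs and should be checked against the current reference.

```python
import time
import requests

API_BASE = "https://api.einstein.ai/v2"
TOKEN = "<your Einstein Platform access token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def _post(path, fields):
    resp = requests.post(f"{API_BASE}/{path}", headers=HEADERS,
                         files={k: (None, str(v)) for k, v in fields.items()})
    resp.raise_for_status()
    return resp.json()

# 1. Upload a labeled data set (here, a zip of per-label image folders hosted at a URL).
dataset = _post("vision/datasets/upload",
                {"path": "https://example.com/furniture.zip", "type": "image"})

# 2. Kick off training on that data set (in practice, wait for the upload to finish first).
training = _post("vision/train",
                 {"name": "Furniture classifier", "datasetId": dataset["id"]})
model_id = training["modelId"]

# 3. Poll the training status until it succeeds or fails.
while True:
    status = requests.get(f"{API_BASE}/vision/train/{model_id}", headers=HEADERS).json()
    if status.get("status") in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(30)

# 4. Fetch the model metrics, then start requesting predictions with model_id.
metrics = requests.get(f"{API_BASE}/vision/models/{model_id}", headers=HEADERS).json()
print(metrics)
```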

Let’s look at the best practices for data collection: In general we say about 200 to 500 examples per label. This is really to kind of get started for a POC and it’s also really use case dependent. So if my products are very different or if my text categories are very different, then you’ll need more examples.

The other thing you want to keep in mind is that in general you want a similar number of examples per label, as well as a wide variety of examples, to make sure that for each label you’re covering the breadth of examples you might get when you use your model in production. Something else you want to consider is a negative label. For example, if I’m creating a model that recognizes tables and chairs, but people might end up taking pictures of something else and you don’t want your model to force one of those predictions, then you’d add a negative data set so that the model can say “Other,” for example. We actually have a negative label you can use to augment your data set for Vision.

Then the other thing is really just to get started. It’s easier than it seems so just go for it, start trying things out, and you’ll notice that you start getting really good results very early on. Just have fun and get started.
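
Before you upload anything, it’s easy to sanity-check that balance advice. Here is a small, self-contained Python sketch (no Einstein API involved) that counts examples per label in a hypothetical two-column intent CSV and flags labels that fall short of a rough target.

```python
import csv
from collections import Counter

MIN_EXAMPLES = 200   # rough starting point per label, as suggested above

def check_label_balance(csv_path):
    """Count examples per label in a two-column CSV of (text, label) rows."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 2:
                continue  # skip blank or malformed rows
            counts[row[1]] += 1
    for label, n in sorted(counts.items(), key=lambda kv: kv[1]):
        note = "" if n >= MIN_EXAMPLES else "  <- consider collecting more examples"
        print(f"{label:20s} {n:6d}{note}")
    return counts

check_label_balance("intent_training_data.csv")
```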

What are some best practices when I label my data?

MICHAEL: I think the best thing to keep in mind is that what we’re doing is supervised learning, so you are the supervisor for the model. You’re in charge of ensuring that you have a quality data set through the data collection process Zineb just outlined, but you’re also making sure it’s labeled correctly. Depending on your use case, you might be able to leverage crowdsourcing to do some labeling, or you might have to do some manual labeling yourself. We try to make this as simple as possible, so you’ll see you can work in an Excel sheet or a Google doc to create a CSV.

That CSV approach works well for a language-labeling task. If you’re doing an image recognition task, you can very easily just collect your images into folders and label each folder as what you would like the model to output. Either way, the old cliché is “Garbage in, garbage out,” so it is important to really focus not only on the data collection, but on the data labeling – that you’re labeling it in a smart way. I always think to start simple. As Zineb said, it’s easier just to get started, and the best way to move forward is to define the five classes – the five labels – that you want the model to output. Collect maybe fifty examples just to understand how hard the task you’re doing is. Evaluate your model and then consider adding more data and labeling that data accordingly to support it. As you evaluate your model, there’s a ton you can do to collect and label new data to support improvements.
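
For the image case, the folder-per-label layout can be reported on and packaged with a few lines of standard-library Python. This is a minimal sketch, assuming a local training_data directory with one folder per label (including an “Other” folder for the negative label Zineb mentioned).

```python
import shutil
from pathlib import Path

# Expected layout:
#   training_data/
#     tables/   img001.jpg, img002.jpg, ...
#     chairs/   img101.jpg, ...
#     Other/    img201.jpg, ...   (negative examples the model should not force into a class)
DATA_DIR = Path("training_data")

# Quick report of how many labeled examples sit in each folder.
for label_dir in sorted(p for p in DATA_DIR.iterdir() if p.is_dir()):
    print(label_dir.name, sum(1 for _ in label_dir.glob("*.jpg")))

# Package everything into training_data.zip, ready to upload or host for the dataset API.
shutil.make_archive("training_data", "zip", DATA_DIR)
```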

Can I use Einstein with any external source?

RENE: Technically you can use the Einstein Platform Services with any data source. The thing is that you have to pull the data out of the data source – for example, text for Einstein Intent – and then feed it to the Einstein Platform Services using our API calls. There are no built-in data connectors to any external services. You have to call the API and, for example, upload the data set that way.
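
In other words, the “connector” is whatever code pulls text or images out of your source system; from there it’s the same API calls as before. Here is a hedged sketch that reads rows from a CSV export of some external helpdesk tool (a made-up example) and classifies each one with an Einstein Intent model; the endpoint, form fields, and response shape follow our reading of the docs.

```python
import csv
import requests

API_BASE = "https://api.einstein.ai/v2"
TOKEN = "<your Einstein Platform access token>"
INTENT_MODEL_ID = "<your trained intent model ID>"   # placeholder

def classify_intent(text):
    resp = requests.post(
        f"{API_BASE}/language/intent",
        headers={"Authorization": f"Bearer {TOKEN}"},
        files={"modelId": (None, INTENT_MODEL_ID), "document": (None, text)},
    )
    resp.raise_for_status()
    # Top predicted label; response shape per our reading of the docs.
    return resp.json()["probabilities"][0]["label"]

# The extract step: any external source works, here a CSV export from a helpdesk tool.
with open("helpdesk_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["subject"], "->", classify_intent(row["body"]))
```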

Rene, do you want to describe Einstein Analytics really quickly?

RENE: Analytics is a dedicated offering that we have which analyzes large sets of data for charting, reporting, or predictive forecasting. There, we have built-in connectors for external sources like a PostgreSQL or MySQL database and so forth.

What should I do if I have no data but I want to start with Einstein?

ZINEB: For a POC you might be able to manually collect the data to start and use that as a baseline. The other thing to take into account is whether your current workflow can be your data source. If it’s an existing application you’re going to be integrating this with, is there a way to leverage some of the data that’s currently there? If it’s a new app, can you potentially deploy it ahead of time, start gathering the data you’ll train your model with, and then integrate the model into the application?

Another approach to think about is whether there are public data sets available that you could leverage, or whether you could source some of the data collection through a third party – just see what’s out there that you could potentially use as a starting point. It seems more daunting than it really is. We’ve had times where we’ve manually created some of these data sets ourselves, and they were good enough to get a POC going and then get more resources to scale it up.

MICHAEL: The cold start problem is always really daunting when you’re looking at a use case thinking it could be so valuable if only you had a data set to leverage. Maybe you’re not used to having sales reps or customers send you images. How can you prompt them to start sending you images, and how could you get users to start consuming the application before it is actually in production, with a feedback loop added in so the end users of your application become a trusted source and can start actually providing you with that training data?

There are a lot of creative ways. Like I said, sometimes we will incentivize our own team just to sit down and label data, create data, go out in the field and take pictures. It’s always daunting until you get started and then you get to really evaluate it. AI isn’t magic but once you can create the model and start to show the value, I think people really get to see how valuable it can be for their businesses and will really come and start helping and aiding in your data collection and labeling in the future.

What background or skills do I need to use Einstein Platform Services? Do I need to learn any machine learning to use Einstein Platform Services?

MICHAEL: Democratizing AI – to do that right, we’re really trying to keep this as simple as possible, so it’s always smart to think about best practices and how you can get more familiar. Our documentation does a great job: if you go to Einstein.ai and click Documentation in the top-right corner, you’ll get a link to the Einstein Vision and Einstein Language docs, where we try our best to guide you through it.

We also provide a forum online for all of our future customers to ask questions – our engineers, product managers, and developer evangelists are on there. I see Rene answering questions all the time. If you’re hitting a roadblock, always be willing to reach out to the community and leverage our documentation as best as possible.

But you don’t need to be a machine learning expert to leverage these. If you can make an API call and gather data, you’re 99 percent of the way there, and you’re really at the point where you can start testing and playing around with these tools and learning as you go. I think everyone enjoys learning and developing their skill set, and this is a great platform to do that.

There are also the Trailhead modules online. Those are huge resources to do a step-by-step walkthrough. We have a few already available and we’ve got more in our roadmap to be released this year, so it’s a great way to do continued development. Even if you’ve already used the platform, go back and run through a Trailhead module again and see how easy it is.

What do I do once I have collected and labeled my data?

MICHAEL: Once you’ve gathered and labeled your data, you’re at the training step. You make the API call to train your model, and then, as Zineb pointed out, you’ve got two options: You can start testing the model on your own, but what we also do is hold back 10 to 20 percent of your data from training. The model never actually sees that data, but once training is completed we test on it, and you don’t get charged for predictions in that phase. We test on that 10 or 20 percent of the data to evaluate your model and give you metrics to understand it. That way you can figure out what stage you’re at in the lifespan of this model: Is it still very early, POC stage? What accuracy numbers do you need to be able to push to production? Which classes, labels, or categories need to be improved in order for you to meet your goals and standards for this model?
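
To make that evaluation step concrete, here is a hedged sketch that pulls the metrics for a trained Vision model and checks them against an accuracy target before promoting it to production. The metricsData field names (testAccuracy, trainingAccuracy, labels, f1) are our recollection of the response shape, not a guarantee.

```python
import requests

API_BASE = "https://api.einstein.ai/v2"
TOKEN = "<your Einstein Platform access token>"
TARGET_ACCURACY = 0.90    # whatever "good enough for production" means for your use case

def review_model(model_id):
    resp = requests.get(f"{API_BASE}/vision/models/{model_id}",
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    metrics = resp.json().get("metricsData", {})

    test_acc = metrics.get("testAccuracy")
    print(f"test accuracy: {test_acc}, training accuracy: {metrics.get('trainingAccuracy')}")

    # Per-label F1 highlights which classes need more or better data.
    for label, f1 in zip(metrics.get("labels", []), metrics.get("f1", [])):
        print(f"  {label}: f1={f1:.2f}")

    return test_acc is not None and test_acc >= TARGET_ACCURACY

print("ready for production?", review_model("<modelId returned after training>"))
```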

How do I try Einstein Language in a demo org?

RENE: If you want to try that out, you have to write some Apex code to call the external API that resides on Einstein.ai. Either take one of the official examples from the Einstein team, which are linked in the official documentation, or take, for example, my GitHub repo or Shane’s GitHub repo, which contain all the Apex needed to call the API.

What if I want to improve my model? How can I make sure my model learns over time? Do you have any recommended best practices for this?

ZINEB: So Michael was mentioning that we provide model metrics for your models. We keep 10 percent of the data for testing, which you can adjust if you want more or less, but by default we keep 10 percent. With that, we provide you with a test accuracy, a training accuracy, and your F1 score. For image classification and Language we provide a confusion matrix, and for object detection we provide precision per class, so we really make it very easy for you to understand which classes and labels are working well and which ones you need to iterate on.

Now, even once you’ve integrated your model into an application, you have the ability to do what’s called a feedback loop. The feedback loop allows you to add mechanisms to your workflow for people to correct any mistakes the model makes, and then use those corrections to retrain the model so that the next time it sees a similar example, image, or utterance, it’ll predict it correctly. When you’re training, you can add false positives and false negatives to your data set to make your model more accurate. Then, once you’re in production, add any misclassifications back to your data set to continually improve your model over time. When you do this using what is called our Feedback API, you’re able to create a new model or update your existing model, and in the latter case you keep your same model ID.

MICHAEL: If you’re very early on in the iteration phase, you can just constantly update your existing model, in which case you don’t need a new model ID. As you get a little further along, you may be adding a feedback loop or an untrusted source – like a customer group you know well but whose feedback you might not be willing to accept wholesale – in which case you can always create a new model from your feedback loop. Then you can compare the metrics between the two models and decide if you want to move forward with the new model ID or your existing model ID.
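
Here is a hedged sketch of that loop for a Vision model: each misclassified image is posted to the feedback endpoint with the label it should have had, and the model is then retrained with feedback included. The feedback and retrain endpoints and the withFeedback flag reflect our reading of the docs at the time; verify them before depending on this.

```python
import requests

API_BASE = "https://api.einstein.ai/v2"
TOKEN = "<your Einstein Platform access token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
MODEL_ID = "<your trained vision model ID>"

def send_feedback(image_path, correct_label):
    """Attach a misclassified example, with the label it should have had, to the model's feedback set."""
    with open(image_path, "rb") as img:
        resp = requests.post(
            f"{API_BASE}/vision/feedback",
            headers=HEADERS,
            files={
                "modelId": (None, MODEL_ID),
                "expectedLabel": (None, correct_label),
                "data": img,
            },
        )
    resp.raise_for_status()
    return resp.json()

def retrain_with_feedback():
    """Retrain the existing model (same model ID), including the collected feedback examples."""
    resp = requests.post(
        f"{API_BASE}/vision/retrain",
        headers=HEADERS,
        files={
            "modelId": (None, MODEL_ID),
            "trainParams": (None, '{"withFeedback": true}'),
        },
    )
    resp.raise_for_status()
    return resp.json()

send_feedback("misclassified_chair.jpg", "chairs")
print(retrain_with_feedback())
```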

What’s on the roadmap for Language?

MICHAEL: First and foremost, we’re always thinking about how we can make our models more accurate and understanding our customer use cases. If you’re working on a customer use case and thinking “Gosh, I really feel like I’ve got a solid data set here. I’ve got hundreds of examples, I’ve got a very defined use case, but I want the model to perform better,” we always encourage you to get in touch with us either through our support forums or directly through your AE. We want to make sure our customers are successful with what they’re working on today.

That being said, the artificial intelligence world is a very rapidly changing space and we’re constantly adding the latest technology to our platform. For Language in particular, we’re going to be adding a really amazing feature called entity extraction in the coming months. It allows you to capture words and phrases within a document. If you think of intent classification, we’re giving you document-level classification: we’re reading a paragraph, a full page, or a couple of sentences and assigning whatever categories you define. What we’re going to do next is add entity extraction. So if a customer service request is about Product XYZ, we could potentially extract out people, places, organizations, times, dates, and dollar amounts – highlight those words and phrases and give them a label. So John Doe might have a problem with a support case he’s dealing with: he’s been online for ten hours and he paid $15,000 for this product and is pretty disappointed. You can use our existing models to understand what the issue is and what the sentiment of the user is, but then we’re actually going to say, John Doe – this is potentially the product he’s asking about, this is how long he’s been waiting, this is the dollar amount he paid.

So that’s entity extraction – the ability to extract certain words and phrases and label them accordingly. That will be available in the next few months. We’re going to give you the flexibility to just define, “Here are examples of people talking about my products.” Maybe it’s a product the model has never even seen before – it will see it in a similar context, recognize it, and label those words and phrases as specific products. If you’re interested in chatbots on our platform, it’s a huge feature that will be leveraged by our chatbot technology as well.

Additionally, with Einstein Language we’re really trying to move beyond English only, so in the coming months you’ll see us adding new languages continuously – European and Asian languages that expand your ability to serve more customers globally.

For Vision, I tried to mention as much as I could there. Improving the models and adding more specific pre-trained models is on our roadmap, and then hopefully you’ll see some new features coming out mid-next year that take on some more advanced use cases.

To put all of this in a nutshell, Einstein is artificial intelligence within Salesforce. To get started, check out Einstein.ai. To learn more, go to Trailhead and check out the modules we have, and check out the Trailblazer Community if you have any questions.

We will be talking more about the Salesforce Einstein Language and Vision APIs in our upcoming Spring ’18 Release, and we’ll also have additional Einstein-focused webinars in the coming weeks, so keep an eye on Twitter. We’ll also be making a little bit of noise about this through email, so keep an eye on your inbox to register.

If you have specific questions just for Michael or Rene, Tweet @MichaelEMachado or Rene at @muenzpraeger. Do we have any last words from anybody presenting – Michael, Rene, anything else you want to say?

MICHAEL: I really appreciate all the comments that are coming in. Hopefully you’ve all signed up for Einstein.ai, because it is a free account. Start leveraging that through your Salesforce account or Heroku. I saw one question come up that I do want to get to: you do have the whole platform available through Heroku. For the Einstein Vision add-on, we’re still working on updating the marketing content to ensure it encompasses both Einstein Vision and Einstein Language, but you do have the full suite of APIs described in the documentation.

Awesome, great! Thanks for jumping on that last one. Everybody, thank you all so much for joining us.

Questions we couldn’t answer in the webinar, answered

What languages are currently supported and what is the roadmap for support for other languages?
MICHAEL: We’ll be running a closed pilot program to add new languages beginning February 2018.

What’s next on the product roadmap for Einstein Platform Services?
MICHAEL: Named Entity Recognition (NER) and new language support are top roadmap items.
ZINEB: Optical Character Recognition (OCR – ability to read text from images) is on the roadmap.

What are the costs for using Einstein Platform Services?
MICHAEL: It’s free to sign up at Einstein.ai, with 2K predictions per month. You’ll need to pay to use more predictions monthly, but full functionality is available in the free tier. To learn more about pricing, contact your account executive (AE).

What is the depth of the objects Einstein Object Detection is able to recognize? And what’s coming next?
ZINEB: We have seen good results with Einstein Vision distinguishing between similar objects, so it’s worth trying out plants, like in the Dreamforce ’17 Developer Keynote.
So far we have been able to address all “video” use cases by recommending customers convert video to still images and use Einstein Vision on that.

What is Einstein Language?
ZINEB: Einstein Language is the umbrella term for our Natural Language Processing (NLP) services. Currently Einstein Language contains two GA services: Einstein Sentiment and Einstein Intent.

Can Einstein be used in managed packages?
MICHAEL: It depends on the flexibility of the package.

How much knowledge of machine learning is required to use Einstein?
MICHAEL AND ZINEB: There are lots of docs and Trailhead modules to help get started if you have minimal background in machine learning.

What input data can be used? How is it fed and stored?
MICHAEL: You can use natural language or images. Einstein is a fully managed platform for storage, model building, and model serving.

What is meant by “training your model” and how do I know if it’s performing well?
MICHAEL: Training your model involves uploading data and making a train call through the APIs. Model metrics are output after training, and we document how to interpret them.

How can I integrate my model?
MICHAEL: See the docs section about integrating a model into an app. There are lots of code samples in the additional resources section of the docs to help.

Where can we learn more about using Einstein Services (Documentation, tutorials, best practices)?
MICHAEL: Check out the docs at Einstein.ai.
