Industry Perspectives on Data Science – Q&A

14 May, 2020

This blog is taken from the Quantifi webinar ‘Next Generation Risk Technology Powered by Data Science’ featuring Celent. The panellists, Cubillas Ding, Research Director, Celent, and Avadhut Naik, Head of Solutions, Quantifi, were presented with a number of questions from the audience.

‘The views and opinions expressed in this blog are those of the individual and not of the companies they represent. As this blog has been transcribed from the webinar recording, there may be minor differences.’

What would your advice be for firms intending to start their journey to employ data science tools?

Cubillas: I think there are a few things. The obvious one would be to start small and iterate, looking at self-contained areas so you can socialise the approach and build expertise at the same time. If you’re going to select different use cases, you’d want to involve some of the business users in thinking about how to prioritise the highest-ROI use case – one that’s preferably useful and aligned with the front office. It’s also about the process of getting people involved in building expertise and thinking about how the specific use case will actually benefit the business, as opposed to doing more back-office work that may not add much value. That would be the immediate starting point. Then, you may need to identify the constituents’ level of expertise and decide how low-level you want the environment to be. With the data science offerings that we looked at – low-code, no-code and business-friendly coding-type environments – we’re really thinking about the level of expertise of those using the environment and what might be helpful.

What are Quantifi’s plans in data science?

Avadhut: Quantifi comes from an analytics background. We offer risk and trading solutions as well, but our genesis was as an analytics provider. Analytics is in our DNA. We have always had tools like Excel and MATLAB, which could integrate with our models and data and give analysts and quants the flexibility to access and analyse Quantifi analytics outside of Quantifi. This is just a natural progression for us. We are now expanding the scope. As Cubillas said, Excel is not going anywhere. However, there are a lot of other tools available on the market that can work with Quantifi data and Quantifi analytics and then combine them with different data sets and data streams, which can be sourced externally, as well as different algorithms like machine learning. Currently, we’re making all of our interfaces Pythonic, so they can be easily accessed from Python. We are already engaging with clients on visualisation and some risk analytics components that they are using. The end goal is to offer an integrated self-service data science platform based on open-source technologies. This would help clients compose risk analytics from different data sources and plug in models from Quantifi as well as third-party models. It would have built-in data and model governance and would offer our clients an environment for a fast cycle of model development, from experimentation to production. That is the goal. That is what we are driving towards.
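To make that workflow concrete, here is a minimal sketch of what combining vendor risk analytics with an externally sourced data stream in Python might look like. Every module, function and column name below is a hypothetical illustration, not Quantifi’s actual API:

```python
# Hypothetical sketch: joining vendor risk analytics with an external
# data stream in pandas. Names are illustrative placeholders only.
import pandas as pd

def load_vendor_var(portfolio_id: str) -> pd.DataFrame:
    """Stub for a vendor call returning daily 99% VaR by desk.
    In practice this would call the vendor's Python interface;
    portfolio_id is unused in this stub."""
    return pd.DataFrame(
        {"date": pd.date_range("2020-01-01", periods=3),
         "desk": "credit",
         "var_99": [1.2e6, 1.4e6, 1.1e6]}
    )

def load_market_stream() -> pd.DataFrame:
    """Stub for an externally sourced data set (synthetic here)."""
    return pd.DataFrame(
        {"date": pd.date_range("2020-01-01", periods=3),
         "credit_index": [99.8, 99.1, 99.5]}
    )

# Join vendor analytics with external data on date, ready to hand
# off to a BI or visualisation tool.
var_df = load_vendor_var("PORT-001")
mkt_df = load_market_stream()
combined = var_df.merge(mkt_df, on="date", how="left")
print(combined)
```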

Which clients at Quantifi benefit directly from the data science approaches and tools that you employ?

Avadhut: Coming back to some of the use cases that I previously discussed, it’s currently more to do with visualisation and BI – integrating Quantifi analytics and Quantifi risk analytics with data from other data streams and presenting it in third-party visualisation tools like Power BI or Tableau. Number two is back testing. The use cases that I presented earlier are not just theoretical; we actually have clients using our analytics in these ways. On the back-testing side, clients are benefiting from trading strategy back testing. As far as portfolio construction goes, we have partnerships with AI firms. AI and machine learning firms are using us for portfolio construction based on bond price forecasting. Another example is a client who is using us for market-making operations – in other words, using AI to help bigger banks make markets more efficiently.
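As a rough illustration of the back-testing use case, the sketch below runs a deliberately simple moving-average strategy over synthetic prices. It is a generic example of the technique, not any client’s strategy or Quantifi’s methodology:

```python
# Minimal trading-strategy back test on synthetic prices.
# The moving-average rule and the data are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic price path: geometric random walk starting at 100.
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()

# Long when the fast average is above the slow one; shift the signal
# by one day so today's trade uses yesterday's information (no look-ahead).
signal = (fast > slow).astype(int).shift(1).fillna(0)
returns = prices.pct_change().fillna(0)
strategy_returns = signal * returns

cumulative = (1 + strategy_returns).prod() - 1
sharpe = strategy_returns.mean() / strategy_returns.std() * np.sqrt(252)
print(f"Cumulative return: {cumulative:.1%}, annualised Sharpe: {sharpe:.2f}")
```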

Can you share your experiences of the pitfalls to avoid?

Cubillas: I mentioned earlier the importance of standardising the analytical and data science development stack as much as possible. In a typical environment, tool selection is usually an organic, bottom-up process driven by what the quants want to use. In the longer run, firms may benefit from standardising the stack as much as possible, and this has become easier now that Python has more or less become the de facto standard. The chief data officer could act as a conduit, facilitating and steering the standardisation of tools based on organisational and use-case alignment across the different groups that may need quantitative tools like this. You may not standardise on only one tool, but at least then you have a fairly standardised set of tools from which the firm can pick and mix. Otherwise, the organisation could degenerate into using fragmented tools with no standardisation, resulting in fragmented skill sets. I think that is one thing to be mindful of if you are starting the journey. Think about limiting a set of tools to a certain group based on the strengths and weaknesses of the tool sets themselves.

Part 1: How is Data Science Transforming Banking and Capital Markets?

Part 2: What are the Use Cases for Data Science in the Financial Markets?
