Research & Innovation

Inaugural Crawford Institute Grants Fund Business Technology Research

How can businesses leverage new technology?  

Generative AI has thrown this question into sharper relief than ever. The technology is more accessible than many that preceded it. It’s easier to implement at scale, too.  

But with that accessibility come challenges. Businesses face pressure to adopt AI lest they fall behind competitors. At the same time, how can decision-makers be sure that AI serves the long-term interests of their business? And — when making these kinds of decisions — where can they turn?  

That is where research projects funded by two inaugural grants from DePaul’s Dr. Curtis J. and Mrs. Gina Crawford Institute for Business Technology Leadership come in.  

One project looks at content-creating “micro-entrepreneurs”; another, at the physics of a large-scale industrial plant. Both offer critical insights for businesses of any size about the promise and perils of implementing AI. 

Khadija Ali Vakeel on Content Creators, the “Tipping Point” of AI Usage, and Keeping the Human in the Loop

Khadija Ali Vakeel, a young Indian woman, smiles in a fashionable suit

Assistant Professor of Management and M.S. in Business Analytics co-director Khadija Ali Vakeel is studying how using AI can help — and hurt — online content creators.  

Driehaus College of Business (DCOB): You're looking at creators as micro-entrepreneurs. Tell us more about that! What makes this an especially timely topic of study?  

Khadija Ali Vakeel (KAV): Right now, side hustles are very popular. So many of us are capturing and creating content every day, in one fashion or another. And part of that is monetizing it.  

The content creator essentially becomes a micro-entrepreneur. They’re asking themselves questions like: How can I increase the performance of my videos? How can I increase my operational efficiency? How can I use the limited time that I have after my working hours to generate a second income?  

DCOB: How is this grant helping you expand your research?

KAV: This grant is a great opportunity for me to combine two areas that I’m really passionate about: business and technology.  

The technology aspect comes from AI and social media. That is the area I’ve been working on and publishing in for the last few years. Even in that short time, the area has changed so much. We’ve moved from limited e-commerce to thinking about these platforms as microcosms.  

The parallel aspect is business, which very few people think about, especially as viewers. You don’t really think about how things are being monetized. But it’s really important for the content creators.

DCOB: In your initial research, you’ve identified a tipping point around the effectiveness of AI for content creators. Use a little AI, and creators see positive results. Use too much AI, though, and creators actually start to see detrimental effects on their engagement. Why do you think that is? 

KAV: From the content creator’s perspective, what AI can help most with is the first impression. Having a much more attractive caption, a more attractive title, transitions, thumbnails, effects — all these things make the post more attractive to the viewer.  And the algorithm drives a lot of that.  

But I think that if you overuse these AI techniques, what might happen is that the content becomes more and more homogenous. Everybody is using the same techniques; everybody is using the same transitions. As you’re scrolling through, you start to get fatigued. You’re seeing what feels like the same content over and over again. And I think that’s the tipping point.  

DCOB: You’re hoping this study will help out organizations as well as content creators. Tell me more about what these stakeholders can learn from each other.  

KAV: The most important stakeholder for this research is the micro-entrepreneur who is trying to monetize their content creation with the help of AI. I hope that one of the outcomes of this project will be some guidelines for creators on when and how to use AI.  

But I think this could also help organizations as they adopt AI. It could help them think about when to use it — and when not to use it.  

Most of all, I hope that this study will help them — creators and organizations alike — understand that we still need to have the human in the loop. The human touch and human creativity are essential — in organizational decisions just as much as individual ones. 

Eric Landahl on the Interface between Science and Business and the Messy Stories behind Neat Numbers 

Eric Landahl, a middle aged white man, in a home with light-filled windows in the background

Professor of Physics Eric Landahl is investigating how businesses make decisions using sensor data that may be less reliable than it appears — a growing challenge in industrial facilities where variables like temperature, humidity, and air flow are continuously monitored.

DCOB: Tell us about the sensors you study. How are they being used — and what can go wrong?  

Eric Landahl (EL): About 30 years ago, back when I was starting my career, we started releasing the same sensors we use in the lab to the business world.  

In the lab, this equipment is coddled. We store it carefully; each time you take it out, you calibrate it. But when we sent these sensors out into the business world, we did that without any qualification. We made them cheap and deployable everywhere. And now they’ve made their way onto the dashboards of business leaders who are making decisions based on them.  

We’re partnering with a real-world deployment facility through the Steans Center and the Spark Center.  There, we get to observe what’s actually happening to our sensors. In practice, sensors are often affected by everyday activity. Fans are placed in front of them, equipment is unplugged and later reconnected, or materials are stacked nearby. In each case, the sensor continues to report precise numbers, even though the measurement conditions have changed.

DCOB: How is your project intervening?  

EL: Over time, business leaders stop trusting this stuff, to their own detriment. That’s what this project is out to cure.  

We have a group of sensors that we call integrity-aware sensors — systems that explicitly signal when their data should not be trusted. You can think of it like a stoplight system. Green means nothing has changed in my environment. Yellow signals that you might want to take a closer look. And red is when I’m going to make my failure loud; I’m going to stop reporting.  

The question is: how do business leaders respond when they’re suddenly told they can’t trust a measurement — especially if they get a red light?  

DCOB: Tell us more about the facility you’re collaborating with! What makes it an ideal place to do this research?  

EL: The Plant is a reimagined meat-packing plant. It’s got these huge, thick walls. It’s located in the Back of the Yards. It’s actually a sustainability project; it’s part of this push to redevelop a food processing capability for the modern age using this business incubator model.  

There’s people brewing beer; they’re making heat. There’s people growing plants; they need carbon dioxide — and they actually need heat from the brewers, so we’d like to shuffle the heat around. They film a popular TV series there on occasion, so there’s a crowd of people who show up and generate heat.

The idea is to develop a more sustainable way of shuffling mass and energy around the building. And you’re managing it all in this really dynamic environment that’s always changing.  

DCOB: Tell us more about how AI factors into the picture. How is the adoption of AI impacting this ongoing issue with how sensor data gets misinterpreted?  

EL: What’s happening now is that AI is being used to find the “bad” data points. AI comes in and looks for patterns. And it will just reject every outlier — everything that doesn’t fit the pattern that it’s interpolating.  

That’s dangerous — because when things go wrong, it’s often the outlier that matters most, and it may be exactly what automated systems discard.

And yet, in the business world, putting AI in the loop has become standard. So we’re trying to at least give business leaders the information they need to do that responsibly. And I’m quite confident that — once they’re made aware of this problem and given the proper tools — leaders’ business acumen will kick in.  

DCOB: Tell us more about what it’s been like to work at the intersection of business and technology. What about this project are you most looking forward to?  

EL: This is an opportunity for me to learn.

Business decision-making often requires acting under uncertainty in real time, which makes this an especially interesting space for me to engage with. I’m eager to learn more about the process of business decision-making and the transfer of governance from factory floor to office and back again. That’s another way the Plant is nice: we can observe the decision-making at close quarters.

I’m not an engineer; I’m not here to build a better sensor. I am a scientist who’s interested in the epistemology of the problem. How do we know what we know?

At its core, this is about how decision-makers determine when they can trust the data they see. And as someone who builds measurement systems, I tend to think of instruments not as tools, but as part of the process that defines what can be known.  
