In conversation with DANI ROSEN
transcript
Can you introduce yourself, please? What's your name? What does your job role involve? That kind of thing.
So yeah, my name is Dani. My pronouns are she/they, and my job is, well, I'm a writer, a content marketing writer. At the moment I work for a tech company which offers online learning for children in Key Stage 2. So I write for a B2C audience; I'm mostly writing for parents every day. And because it's content marketing, it's really focused on writing things that will be useful and relevant to parents. It's not purely about writing promotional stuff, it's about writing info that parents are searching for, and then kind of weaving in a tiny little bit of product promotion. So yeah, mostly focused on writing blogs and useful resources.
Nice. Do you see artificial intelligence already interacting with your job role? Is that something that's kind of already being used by people who are in your area, or in similar areas, or is it a bit of a foreign concept?
I would say it's starting to interact with it. Definitely as a writer, there's a lot of talk about AI and what it can do, and ChatGPT especially. Like people saying 'oh, did you know you can get your blog post written by AI now?' And that makes me kind of nervous, because everyone knows about it, everyone's talking about it. But obviously if this thing works really well, then what does that mean for our role? What does that mean for my career? So yeah, it feels like it's really ramped up since maybe towards the end of last year, coming up to Christmas. Suddenly the hype was everywhere. Before that, AI couldn't really write convincing or 'human-sounding' content. But now I feel like, out of nowhere, it's suddenly banging at the door. Like 'I'm going to take your job!'.
Yeah, so I was talking to an artist yesterday about using ChatGPT. They're a dancer, and they're writing in collaboration with it to create poetry. It was so fascinating.
That’s amazing.
But a lot of people are adopting models like ChatGPT, or something similar, and using them in a collaborative way, or in different ways that interact with what they're doing. I'm also going to talk to tech professionals as well to really kind of understand what the hell is going on. And a lot of people are calling it the "AI revolution", like you said. It feels like a very recent increase in this kind of accessibility and awareness of what it can do.
This is what is kind of interesting for my role as well. The main thing I'm focused on is helping parents to find us through Google. So SEO. And obviously, Google's algorithms are constantly adapting to make it harder and harder for spam-mongers to get to the top of the search results. And this is just the latest thing, right? People generating AI content and using it to get to the top of search results and promote whatever they're doing. And I'm really hoping and expecting - I don't know much about it, but I'm hoping - that Google is going to get more sophisticated about actually detecting content that has just been churned out by AI, being able to tell whether it's been written by a human or not. And then it will maybe start to reward content that's been written by humans. I don't know if that's possible, because some of the stuff that people have shown me that's been written by ChatGPT, I'm like, that's scarily good.
Yeah, so I read a report that said that 84% of marketers are now using AI, up from 29% in 2018.
Oh my god, OK, 84%? I'm missing a trick.
Which is just mind-blowing. And another very scary fact: 79% of marketing teams report a revenue increase after adopting AI.
Nooo!
So I feel like a lot of people are, like you were saying, finding it scary. It makes people nervous - perhaps threatened, or kind of overwhelmed by it. Where are you standing with it at the moment? Do you use it? Are you interested in using it and learning about it? Or is it something that you want to avoid? Like you were saying, Google, or similar places, could reward that human work - is that something you want to stick with, or could there perhaps be a blend? Where are you at with it?
I think, at the moment, I'm very much avoiding it. I almost slightly don't want to think about it, but I want to get to a place where I can actually harness it and use it. But also I don't want to rely on it completely. I think some people probably think that marketing can be entirely automated, and I just don't think that's the case. I don't think it will ever be the case. So at the moment, I haven't really- I suppose it also depends on what we're classifying as AI, because there are definitely tools that I use that help me with writing.
I was talking to someone yesterday about the idea that, because you're feeding this AI only things from a human perspective, it can only ever have that human nature, and outlook. Because that's the only input that it’s ever going to get.
Right.
That it feels like it's more like holding up a mirror to what we're creating already. So yeah, like you were saying, there's kind of tools that are already existing and have for a very long time, AI is already very integrated into our work. But there seems to have been a lot of fast changes recently.
Yeah, it really does. So the tools that I use, that I'm really comfortable with and fully embrace, really help me to do my job. It's stuff like editing, basically - stuff that does the job of a really, really skilled human editor, that will highlight to me where I could make my phrasing a bit simpler, or where I've used a convoluted phrase and could swap it out for something else. I've been using that for years and I love it. It's brilliant. And I guess, from what you're saying, that's technically a type of AI, because it's doing a job a human could do. It's automated. What has suddenly seemed to be creeping up really quickly is AI actually creating the content - and it's not being created from nowhere, because it's got human input at some stage. But that's the type of AI which I've not used, and I've not tried to use, because I'm scared. I think what it is, is partly just being busy, right? It's partly being like, 'oh, I'm churning out so many blogs, I don't have time to stop and think, could I do this more efficiently by using this technology?' But it's also that a big part of me is scared that I will try it, and it will be better than me. That just invalidates everything I've ever done. Which is wild. I didn't realize that until just now, but I am definitely slightly avoiding it through fear, perhaps.
So, in terms of your job role, would you define it as something that is a part of the creative industries? I feel like with content marketing, content creation, there seems to be a lot of stigma around those kinds of roles, which perhaps is decreasing as they become more popular and more necessary for organizations to survive. I think a lot of people who are outside of that world really struggle to understand what it is, what it involves, whether it falls into marketing, or communications, or 'creative'. How would you define what you do?
That's a really good question. I think I'm kind of on the cusp. I'm like adjacent to the creative and arts world. I guess I'm a creative professional because I write, but it's also heavily marketing focused. I do have targets. I've got to get a certain amount of people to our website every month, whatever, and I've got to do that by writing about things that are relevant to them. So, there's like a lot of marketing strategy behind it. It's like on the cusp, I would say, of an arts role.
The reason I ask, actually, is because there are so many conversations happening, especially in visual arts right now, which are probably pretty applicable to writing, around this idea of ownership over work. And I feel like even if you're in a role that is perhaps deemed less creative - even accountancy, or, for example, search engine optimization, or more data-based kinds of content strategy - what makes it our work is that 'humanness' to it. Having been in the arts for a while, that human input in work seems almost synonymous with 'creativity'. With this increase in awareness and accessibility of artificial intelligence, I think a lot of people feel threatened, or overwhelmed, by the idea that we could lose that autonomy over our work, that ownership, or even our credit. Like you were saying, when there are so many people doing it so quickly with AI, with perhaps not as much knowledge, or learning, or the years that it's taken other people to get to that place, you feel that sometimes there's a lack of respect for the work that you do, or the autonomy that you have over your work. How do you feel about that, and that ownership over what you do?
I've definitely felt that strongly within the last few months - me and my colleagues as well, who do a similar role to me. I think it's even more heightened because we work for a tech company, so people are quite interested in the latest tech developments and stuff. And there are so many times that someone who doesn't work in the marketing team, who doesn't do any writing as part of their job - maybe an app developer or whatever - will post an article that's like, 'oh, did you know that our blog posts can be written by AI now? And it's really cool', and we're kind of over here thinking, well, 'hello - the company does employ people who actually write this, and it's taken us like ten years to build up our skills to do this'. So there's definitely a sense of anxiety and a sense of - I'm trying to think of the right way to put it - concern about a lack of respect over what we do. Maybe concern about losing that almost "prestigious" feeling of, yes, I have these special skills, and look at all these things I've created - and there's a sense that that could be taken away from us. Because AI will mean that anybody can do it.
Yeah, no, I get that.
But I don't want to be the kind of stick in the mud who's like, 'oh, this is terrible, it's the end', because so many people throughout history have had their jobs taken away by automation. And I didn't care the whole time. I was like, 'oh, get with the times, keep up'. And now it's coming for my job. I'm aware of my own hypocrisy, suddenly feeling the panic.
Yeah, no, I get what you're saying. I'm surrounded by artists constantly. And we've been told basically our whole lives that ours is the job least likely to be taken over by AI - in every study, the top role. And then in the last couple of months, they started creating generative image programs where you can just type in words and they make a painting. And all of a sudden, everything we've been taught and told - that we're safe, that all of these soft skills can be applied to any industry, whatever - suddenly it's been like, oh my God, what is happening? And it feels scary, it's nerve-wracking. But there is that, like you said, 'prestige' to it. You work so hard to get to where you are, and then it's almost undermining that work and that effort and the skillset that you've brought and grown for so long. I think that is a huge part of it as well.
Definitely feel that. It's really interesting, obviously I know about the kind of image generation stuff, but I've never really thought about how that's affecting visual artists. And it's interesting that they're going through a similar thing to writers at the moment. You're so right about us being told that no one can replace what we do. It's impossible. A machine can't do what you do, and now suddenly they can. Or they are starting to. It's a wild place to be in.
Yeah. Could you see it being a positive thing in the future? Because there is, and rightfully so, so much fear, and worry, and anxiousness. But I also feel like there's a lot of fear-mongering going on - almost every article you open is like 'you're going to lose your job', and it's exhausting, and it's probably absolutely horrendous for people's mental health.
Yes.
And I'm trying to reframe it, or consider a perspective where it could be more positive, in a balanced way. Do you find that there are ways in which it could be seen as a positive and how might that manifest? In an ideal world, what would your goal look like with the incorporation of AI?
Yeah, so I think - to keep the, I guess for want of a better word, prestige element, and I'm not talking purely about my ego, I'm talking about people having a livelihood and protecting their livelihood - I think it would be really cool if these kinds of creative or marketing roles pivoted a little bit, to where you're the 'curator' of the AI content generation.
It's not like they could fire the whole marketing team and just replace it with ChatGPT, because they'd get a whole bunch of complete nonsense. But somebody like me could be responsible for the strategy - knowing what to generate, knowing what to program essentially, and when to release it. Although the scary thing is, AI can probably tell you that too, can't it? It can tell you when you need to post this, and who you need to target it at. I guess I don't really understand enough about how it can work, but some sort of curator. And that's still difficult, because a lot of people would lose their jobs, especially at a lower level. The person you might say to, 'oh, could you draft me a blog post on this', and then I would refine it - that person might still lose their job, if ChatGPT is now doing the initial draft and I'm just kind of organizing it, managing it. I still don't think it's an ideal scenario.
Thinking about what you say, about all the doom and gloom that we read about - it definitely does harm my mental health to read 'your careers are going to go, there's going to be nothing left for you', and it's all scaremongering. I think what would be a really nice angle, for once, would be to read something like 'hey marketing experts, here are five ways that AI can help you', or 'here are the things you can look forward to over the next few years, about how AI is actually going to make your job easier and more efficient' - not about how it's going to replace you. And I guess there are still going to need to be some human roles. They're just going to need to be different, aren't they? Knowing how to work with and use that technology, rather than necessarily creating everything yourself from scratch.
And seeing it more as a collaborative tool - like a contractor, an outsourced person that you go to and say, help me with this, I don't have the capacity for that in my huge busy schedule, all the people underneath me are busy, blah, blah, blah. It was interesting what you were saying about people who are in lower roles, who are likely to be in a lower socioeconomic position in life.
Or just starting out their career.
Yeah, absolutely. The people who are perhaps more vulnerable in the “career ladder”, in whatever industry, are the ones that are going to take a lot of the hits, if this does progress in the way that people are scared of.
That would be my concern. With like ten years or whatever of experience under my belt, I might almost be able to pivot into something - I might just about be okay. But another thing, what you said about outsourcing, I think that's a really good way of looking at it, a really good point. For me, for example, last year we worked with a whole bunch of freelancers who were really difficult to work with. They were really scatty, and for some of them the writing quality was pretty bad. I would spend longer editing the article than I would writing it in the first place. And we paid them per article. That's the kind of thing that something like ChatGPT could do, obviously for free, and without the mistakes. But then those freelance opportunities wouldn't have been there for those people, and they wouldn't have had the experience. And they were definitely people- I think some of them were students. So my end of things would have been easier using AI, for the company, but the actual people who gained that experience wouldn't have been able to. So it does concern me a bit.
You are not just a content marketing manager. It's not all that you are, and it's not all that you have been. If I'm correct, you've worked in mental health and have done media and editing for a while, but you also studied English Lit and Language at uni.
Well, I've always really liked reading - first and foremost it was reading, hence doing an English degree. But then I discovered through that that I actually prefer the language side of things and find it more interesting. I really enjoyed all the language modules I did, learning about how we communicate and how language is used for different purposes, bad or good. And I knew that I wanted to go into something involving writing. And then I guess- I guess I really wanted to go into mental health, because mental health is a big passion and really important to me. But it took me years to figure this out: I also don't actually like working with people that much. I prefer to be left on my own to be a bit creative. I'm happy working in a team, but I wouldn't be happy in a job where I had to speak to people all day, with my schedule booked up - like a therapist or something, whose job is meeting with people all day. I would get burnt out very quickly. So I had to figure something out that would allow me to work in the mental health field, but doing something creative, not something person-centered, or person-driven. Like person- I don't know what the word is,
Yeah, interfacing with people-
Yeah, that's it, that's the word. But that's a really hard job to get, because there are not many mental health nonprofit organizations, and the ones that exist are in London. So I did jobs that were doing the kind of work that I wanted to do, but not on the subject that I wanted.
In terms of your previous work in different industries and different fields - it doesn't have to be a perfectly thought-out idea - could you see artificial intelligence interacting with, for example, something like mental health charities, or anything else that you've done? And if you had to talk to the people that are creating this AI software, what are the things that you would want them to consider?
That's a really, really interesting question. I guess to answer the first one, I could definitely see AI working in any of the fields that I've been in. AI could easily write an IT book, it could easily write a corporate report - I mean, half of that is just recycled nonsense anyway. And in terms of mental health, the mental health company that I worked for was a training company. The main kind of content creation was writing the manuals that you would get on a course. There would be risks to it, obviously. It would still really need to be checked, and kind of moderated and curated, by a person who had that sector expertise and knowledge - someone who's trained as a therapist or whatever. But in the same way, anything that I wrote would need to be checked by someone with mental health qualifications. So yeah, it could definitely be applicable. In terms of actually providing support to people - this is a bit of a tangent, I guess, but providing support to people with their mental health - I actually think that it could do that. But it would have to be a very first line, the first thing that someone tries. And there has to be someone there on the other end, ready for if it needs to be escalated for whatever reason. Because the real danger would be if people are turning to ChatGPT for mental health support - which I think they are, I've heard that people are having to do that, because waiting lists are so long, people can't get to see a therapist on the NHS, and in the meantime they need to talk to somebody. They can't afford to just go private. It makes perfect sense that some sort of chat tool could be used for that, if that is helpful for people, and if that is supportive for people. But it just needs to be so carefully regulated. There definitely would need to be a real human monitoring it, because I can just imagine the absolute disasters that could happen if not.
Your second question I really like - what would I want the makers of this technology to consider? I guess I would want them to just be careful, and be cognizant that it's not safe to replace everything straight away. Kind of in the same way as self-driving cars: they're not foolproof, they do crash. And that's not to say that we shouldn't explore that technology, but the makers of them shouldn't be overconfident. And I think that the makers of these kinds of AI technology shouldn't be overconfident either. They shouldn't claim things like 'this can replace your entire staff', or 'this can offer mental health support' - they should be very clear about what its limitations are.
Yeah, I agree that there's a transparency that's needed that isn't really there - or it feels like it's not there at the moment, in a lot of places. All of this artificial intelligence is being developed, and at least from what I'm seeing, it's often being applied with the intention of more efficiency, more economic growth, more whatever. And perhaps that's why there's such a narrative where the "workers" are scared, and are feeling threatened by something that seems to just be putting capitalism on steroids.
Basically, yeah.
And on that note, actually, many AI tools are monetized and privatized, and you have to pay to access them. And then there are things like, perhaps the most talked about one, ChatGPT - perhaps because it is so accessible and free to use. You can pay for a subscription, but at the moment that's just for faster response speeds and priority access. But tools like that - like ChatGPT, like image generation software, or other tools that could be used in any field, for example marketing - do you feel like they should be privatized, or do you feel like they should be public access, the same way that Google might be?
I would fully expect them to be privatized, just because of, like, capitalism. And I'm surprised that ChatGPT is still free. I could see them operating on a 'freemium' kind of model, where you get a really basic version free, and then companies have to buy a subscription, and maybe there's a discount for charities or whatever - that type of model. But the idea of it being free to all, like Google, is interesting, and maybe kind of scary, but I don't really know why. I don't have a good answer for why. In a way, it would remove a lot of privileges - you know how the written word is privileged, kind of owned by the privileged, right? Not everyone has had the education or the tools to be able to use it, to be able to harness it. And that can massively disadvantage you. So in theory, having open access to something that can write really well for you could really help to break down barriers for people. But it's really interesting to me how that thought immediately scares me. The thought of everyone having access to it scares me, because then anyone could write anything about anyone. The stuff to do with deepfakes scares me a lot, especially when it's used against women for revenge porn and stuff. I don't know. I don't have any good answers as to how I think it should be regulated. I guess there are just pros and cons.
Yeah, there's an implication that with access to that, anyone could be anyone, and they could be spreading any information anywhere to-
Convincingly.
Convincingly to vulnerable people.
Yeah, that's it. I think it adds that level of convincingness that not everyone would be able to come up with on their own. Another thing that scares me is the idea of AI-generated news articles, because I know that exists. There are those really spammy news sites, but you can immediately tell that they're just AI-generated because they read absolutely terribly, they don't make any sense. Whereas if that gets more and more sophisticated, and the news that we're reading has not been written by a human at all - but then again, why does that scare me? Why is that more scary than a human bias? I don't know. I'm glad we've had this conversation. This is forcing me to really examine some of my preconceptions.
It's very interesting. There are a lot of conversations happening at the moment about how, because artificial intelligence can only be fed certain data, there is AI software exhibiting biases - political leanings, or social and ethical preferences absorbed from people - because all of the content it takes in is just a reflection of our media, and the data that we've created as a society. Which is a very scary thought.
Yeah.
Like you were saying - as it becomes less detectable and more sophisticated, how do we stop those biases, and how do we detect them on an individual level? Everything that you're taught in school about finding biases in newspapers, finding plagiarism, blah, blah, blah - all of a sudden, that needs revamping hugely.
That's a really good point, because you obviously learn about - and I think kids are taught it earlier now than we were, which is a great thing - who's written this piece? What is their agenda? What motive might they have for writing it in this way? I think educators need to be really on it now. And not just educators - it's not on individual teachers, it's on the government and on lawmakers as well, to really understand and keep up with what the implications are going to be, so that kids are taught how to think critically. Because my fear would be that with AI that absorbs this kind of Internet racism, or sexism, transphobia, whatever - when something's fed through an AI, will people start to see it as, oh, this is unbiased, this is a source of truth, when actually it's just as biased as we are, because it's regurgitating our-. So yeah, I think people need to be vigilant against that.
Yeah, definitely. Yeah, that's it, I think.
There are definitely going to be good things that come out of it.