As Anti-LGBT Hate Increases Across the Real World, Many Still Celebrate Pride Month in Second Life & Other Virtual Worlds

Cajsa Lilliehook covers the best in virtual world screenshot art and digital painting

Teal Aurelia offers a powerful reminder of why Pride month exists. On Monday, Uganda’s President Yoweri Museveni signed an anti-gay law that makes homosexuality a capital crime. Before anyone dismisses Uganda as uniquely backwards, note that this law is the work of American evangelist Scott Lively. There is a new moral panic overtaking the real world, filled with fear of gay and trans people. Moral panics do not last forever, but they do incalculable harm while they hold sway.

With the real world rife with these dangers, it’s no surprise that many find themselves most free to express their true identity in the virtual world. As I wrote last year, the trans community is remarkably large and supportive in Second Life:

The publisher, Linden Lab, promotes LGBT-friendly communities in its Destination Guide. This is a corporate signal that the community is welcome and integral to Second Life: not an underground culture, but a community embraced by the company. In Search, there are 694 places that identify themselves with trans in their keywords. There are also over 500 results for groups (though a few of those are unrelated to gender, such as one called the Trans-Siberian Orchestra). This indicates a lively, vibrant trans community in Second Life.

One of the groups is for the Transgender Resource Center, which has an in-world gathering place where they hold discussions on topics in the news, and where call-boards tell visitors who is online at the moment to offer help. Their Facebook page is accessible whether you use Facebook or not. They also have a secure Discord server where there are always people to talk to.

Many if not most virtual worlds have similar communities. Hamlet recently shared a survey of the VRChat community suggesting that nearly 1 in 5 VRChat users are trans/non-binary or otherwise outside traditional cis male/female categories.

For more virtual world pics celebrating the beginning of Pride month, read on:

Disa Asylum’s Happy Pride 2023 marks an annual celebration for her. I love her Gay Agenda. I sometimes think the actual “gay agenda” is too simple, so people feel the need to invent conspiracies and complications. We can’t quite believe that people simply want to be treated the same as everyone else. There has to be more. So people project what they themselves might seek if they were seeking power.

Naraelina Ordinary also wishes everyone a Happy Pride. Her tee says “Gay and Tired,” which seems a perfect expression for LGBTIQA people who are sick and tired of the backlash and the moral panic over the gay agenda. If a scary gay agenda existed, the very successful gay rights organization Freedom to Marry would not have shut down after the gay marriage decision; it would have identified another demand. It didn’t, because the only gay agenda is staying alive while living their authentic lives.

Pride is a month-long event, so I hope there will be even more Pride photos this month. If you do one you want me to see and possibly feature, tag me or give it the hashtag #SLPride.

All images copyright their respective artists.

See all of Cajsa’s Choices here. Follow Cajsa on Flickr, on Twitter, on her blog, and on her Ko-Fi.

Suggest images and Flickr feeds to Cajsa: Cajsa’s Choices is devoted to unique, artistic, and innovative virtual world-based images and screenshots that showcase the medium as an art form and Second Life as a creative platform. (Generally not images that fit on this Bingo card.) To recommend the best in virtual world imagery, tag the picture with #CajsaNWN or tag Cajsa Lilliehook by adding her to the photo.


Cajsa Lilliehook is a sixteen-year resident of Second Life, where she owned a photo studio, spent several years as a DJ at The Velvet, and, for her first SL job, cleaned up prim trash. She co-founded and runs the It’s Only Fashion blog with her best friend Gidge Uriza. She also has a book review blog, Tonstant Weader Reviews, and a cooking blog, Single Serving Recipes. She spends a lot of time researching and reporting on Republican sexual predators. In her first life, she is a retired grassroots leader who has worked for economic and social justice issues most of her life. She is also the minion of a cat named Nora.

Have A Great Week From All Of Us At Zoha Islands/Fruit Islands

Second Life: Win 32-bit support ending, MacOS system requirements updating

via Linden Lab

In keeping with statements made in recent user group meetings, Linden Lab has announced changes to the Second Life minimum system requirements for both Windows and Apple MacOS operating systems, to take effect from Saturday, July 1st, 2023. Specifically, from that point onward, Linden Lab / Second Life will no longer support:

  • The 32-bit version of the Windows operating system.
  • Any version of MacOS below 10.13 High Sierra.

Available stats for Windows suggest that the percentage of users running the 32-bit version is in the low single digits – and many who are running it do so on hardware capable of supporting the 64-bit version. The stats for MacOS versions below 10.13 are less clear, but High Sierra has been generally available since 2017.

With these changes it is further noted that:

  • Users running the viewer on 32-bit Windows will still be able to access Second Life after July 1st, 2023 – however, they will not be supported by the Lab’s support teams if they have viewer difficulties.
  • Users on versions of MacOS preceding 10.13 will need to upgrade their operating system to a newer version to access future Second Life viewer updates.

These steps are being taken to allow the Lab to focus on stability, performance, and maintaining compatibility with the most up-to-date technologies, as well as on viewer security considerations.

To help users running either 32-bit Windows or an older version of MacOS meet the SL minimum system requirements from July 1st, 2023 onwards, the Lab provides the following advice (a quick way to check where your own system stands is sketched after the list):

  • Second Life 32-bit Windows Viewer Users:

    • Ensure your computer meets the system requirements for the 64-bit Viewer.

    • Back up your preferences and settings before making the transition.

    • Download and install the 64-bit version of the Second Life Viewer.

  • Users on MacOS versions older than 10.13:

    • Check the Apple support website for information on upgrading your operating system.

    • Upgrade to a supported version of MacOS that meets Second Life’s minimum requirements.

    • Before upgrading, back up your important files and make sure your system is compatible with the newer version of MacOS.
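
If you are not sure which category your machine falls into, here is a minimal sketch of a local check, assuming Python 3 is installed. It is not an official Linden Lab tool (the check_sl_minimums helper is hypothetical); it simply uses Python’s standard platform module to compare your OS architecture and MacOS version against the new minimums.

```python
# Hypothetical local check against the new SL minimums (not a Linden Lab tool).
# Uses only Python's standard library "platform" module.
import platform

def check_sl_minimums() -> None:
    system = platform.system()
    if system == "Windows":
        # platform.machine() reports e.g. "AMD64" or "ARM64" on 64-bit
        # Windows, and "x86" on 32-bit Windows.
        arch = platform.machine()
        print(f"Windows architecture: {arch}")
        print("Meets new minimum (64-bit):", arch.endswith("64"))
    elif system == "Darwin":
        # platform.mac_ver() returns e.g. ("10.12.6", ...); compare the
        # major/minor pair against 10.13 (High Sierra).
        version = platform.mac_ver()[0]
        major, minor = (int(x) for x in version.split(".")[:2])
        print(f"MacOS version: {version}")
        print("Meets new minimum (10.13+):", (major, minor) >= (10, 13))
    else:
        print(f"{system} is not covered by this announcement.")

if __name__ == "__main__":
    check_sl_minimums()
```

On a machine that meets the new requirements, the reported Windows architecture should end in “64”, or the reported MacOS version should be 10.13 or later.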


For full details on the changes, please refer to the official blog post. Concerns / feedback should be directed to Linden Lab, as I am unable to address support issues.

Have A Great Week From All Of Us At Zoha Islands/Fruit Islands

How to Create PBR Materials


This week’s Second Life University tutorial will teach you how to create, upload, apply, and fix PBR Materials in Second Life, and much more!

Download the latest GLTF Materials release candidate viewer, log in to the Beta grid, and get playing with PBR Materials!
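
Under the hood, Second Life’s PBR materials follow the glTF 2.0 metallic-roughness conventions, so a basic material boils down to a handful of parameters. Here is a minimal sketch of what such a material definition looks like, written out as a .gltf file from Python; the file name and values are illustrative only, and the viewer’s upload tools handle the real packaging.

```python
# Illustrative sketch of a minimal glTF 2.0 PBR material (metallic-roughness
# model), written as JSON with the standard library. Values are examples only.
import json

material_file = {
    "asset": {"version": "2.0"},  # required glTF header
    "materials": [
        {
            "name": "ExamplePBR",
            "pbrMetallicRoughness": {
                # RGBA base color in linear space, each channel 0.0-1.0
                "baseColorFactor": [0.8, 0.1, 0.1, 1.0],
                "metallicFactor": 0.0,   # 0 = dielectric, 1 = metal
                "roughnessFactor": 0.5,  # 0 = mirror-smooth, 1 = fully rough
            },
            "emissiveFactor": [0.0, 0.0, 0.0],  # optional self-illumination
        }
    ],
}

with open("example_material.gltf", "w") as f:
    json.dump(material_file, f, indent=2)
```

Texture maps (base color, normal, metallic-roughness, emissive) plug into the same structure as indexed references, which is roughly what the viewer’s material editor manages for you.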

Have a great week from all of us at Zoha Islands / Fruit Islands

In Russia, Ukraine News is Censored, But Counter-Strike is Extremely Popular.

So Reporters Put the Truth About the Invasion in a Secret Room Players Can Only See When They Die.

Amazing cyberpunk-made-real story of the month, made possible by the fact that Counter-Strike: Global Offensive is extremely popular with young gamers in Russia (most of them either in Putin’s military or likely to be conscripted into it), a country where news of the nation’s real invasion of Ukraine is censored.

Here’s what Finnish reporters did with a user-made CS:GO map:

When a player’s avatar dies, the game view changes: For a while, the player can go through walls or fly around the map. At this stage, they can also find the secret room hidden underground…

There is a light above the door. This is a hint that taking a closer look at the object may reveal something.

There are stairs leading to an underground room.

First, the player sees a wall with a headline.

Next to the headline, there is a map showing how many civilian targets in Ukraine have been hit by Russians.

Another wall displays a news item on the Bucha massacre. When the player comes close to the images, they hear a Russian voice-over on their radio telling them what the images are about: Ukrainian civilians murdered by Russian soldiers.

Read the English-language story here. The CS:GO map “de-voyna” is available on Steam. Consider giving it a good rating, so that more players will see and experience it.



Have A Great Week From All Of Us At Zoha Islands and Fruit Islands

How Will A.I. Hurt Second Life In The Near Future?


I asked a chatbot A.I., “Will A.I. destroy Second Life?” and this is what it came up with. I will include both answers, and you can see that they are quite descriptive. You can decide for yourself whether to use A.I. I will not be using it; in fact, I will be disconnecting any and all paths to it in writing this blog.

Answer 1: It is difficult to predict exactly how AI will affect Second Life in the future, as there are many variables to consider. However, in a worst-case scenario, AI could potentially hurt Second Life in several ways:

  1. Lack of Personalization: While AI has the potential to provide personalized content and recommendations, there is also a risk that it could lead to a homogenization of content within Second Life. If AI algorithms prioritize certain types of content or behavior over others, it could limit users’ exposure to diverse and unique experiences within the virtual world.

  2. Invasive Surveillance: If AI is used for content moderation or monitoring user behavior, it could potentially lead to an invasion of user privacy. If users feel that their every move within Second Life is being watched and analyzed, they may be less likely to engage in the platform and share personal information.

  3. Bias and Discrimination: AI algorithms are only as unbiased as the data they are trained on. If the data used to train AI is biased or discriminatory, it could lead to AI-powered systems that are unfair and discriminatory towards certain groups of users. This could lead to a loss of trust in the platform and harm Second Life’s reputation.

  4. Loss of Human Interaction: One of the key draws of Second Life is the ability to interact with other users in a virtual environment. If AI is used to replace human interaction or automate tasks within the platform, it could lead to a loss of the social and community aspects that make Second Life unique.

However, it is important to note that these risks can be mitigated through responsible and ethical AI development. By prioritizing user privacy, transparency, and fairness in AI implementation, Second Life can minimize the potential negative impacts of AI and ensure that it continues to thrive as a vibrant and engaging virtual world.

Answer 2: Artificial intelligence (AI) has the potential to revolutionize our world in numerous ways. From improving healthcare to automating tedious tasks, AI can make our lives easier and more efficient. However, as with any technology, there are also potential negative consequences that must be considered. In the case of Second Life, an online virtual world where users can create and interact with avatars, the use of AI could have a detrimental impact. In this blog, we will explore how AI could hurt Second Life and the potential implications of this technology.

First, let’s examine the nature of Second Life. It is an online world where users can create avatars and interact with others in a virtual environment. It is a place where people can explore their creativity, socialize, and even conduct business. While it may seem like a simple game or platform, Second Life has become a significant part of many people’s lives. It is a place where people have invested time, money, and emotional energy. Therefore, any change that impacts this world can have a significant impact on its users.

One way that AI could hurt Second Life is by automating many of the tasks that users currently do manually. For example, imagine if an AI system could automatically design and build new virtual objects without any human input. While this may sound convenient, it could lead to a decline in creativity and a lack of diversity in the virtual world. Additionally, if AI systems become responsible for managing the virtual economy, it could lead to a lack of transparency and fairness. Users could feel that the virtual world is rigged against them, leading to frustration and resentment.

Another potential issue with AI in Second Life is the potential for it to be used for malicious purposes. For example, AI could be used to create bots that can infiltrate the virtual world and wreak havoc. These bots could be programmed to steal virtual currency, damage virtual property, or harass other users. Furthermore, AI could be used to create deepfakes, which are videos or images that are manipulated to show something that never happened. This could lead to false accusations, damage to reputations, and a breakdown of trust in the virtual community.

Finally, the use of AI in Second Life could also lead to a decline in human interaction. If AI systems become advanced enough to mimic human behavior and conversation, users may be less inclined to interact with real people. This could lead to a decline in social skills and a lack of empathy, as users become more accustomed to interacting with machines rather than people.

In conclusion, while AI has the potential to revolutionize our world, it is important to consider its potential negative consequences. In the case of Second Life, the use of AI could lead to a decline in creativity, a lack of transparency, malicious use, and a decline in human interaction. As we continue to develop this technology, we must be aware of its potential impact on our virtual worlds and take steps to mitigate any negative consequences.

Have a great week from all of us at Zoha Islands / Fruit Islands

How the first chatbot predicted the dangers of AI more than 50 years ago

Oshan Jarow is a fellow at Vox’s Future Perfect, where he focuses on economics, consciousness studies, and varieties of progress. Before joining Vox, he co-founded the Library of Economic Possibility, where he led policy research and digital media strategy.

It didn’t take long for Microsoft’s new AI-infused search engine chatbot — codenamed “Sydney” — to display a growing list of discomforting behaviors after it was introduced early in February, with weird outbursts ranging from unrequited declarations of love to painting some users as “enemies.”

As human-like as some of those exchanges appeared, they probably weren’t the early stirrings of a conscious machine rattling its cage. Instead, Sydney’s outbursts reflect its programming, absorbing huge quantities of digitized language and parroting back what its users ask for. Which is to say, it reflects our online selves back to us. And that shouldn’t have been surprising — chatbots’ habit of mirroring us back to ourselves goes back way further than Sydney’s rumination on whether there is a meaning to being a Bing search engine. In fact, it’s been there since the introduction of the first notable chatbot more than 50 years ago.

In 1966, MIT computer scientist Joseph Weizenbaum released ELIZA (named after the fictional Eliza Doolittle from George Bernard Shaw’s 1913 play Pygmalion), the first program that allowed some kind of plausible conversation between humans and machines. The process was simple: Modeled after the Rogerian style of psychotherapy, ELIZA would rephrase whatever speech input it was given in the form of a question. If you told it a conversation with your friend left you angry, it might ask, “Why do you feel angry?”

Ironically, though Weizenbaum had designed ELIZA to demonstrate how superficial the state of human-to-machine conversation was, it had the opposite effect. People were entranced, engaging in long, deep, and private conversations with a program that was only capable of reflecting users’ words back to them. Weizenbaum was so disturbed by the public response that he spent the rest of his life warning against the perils of letting computers — and, by extension, the field of AI he helped launch — play too large a role in society.

ELIZA built its responses around a single keyword from users, making for a pretty small mirror. Today’s chatbots reflect our tendencies drawn from billions of words. Bing might be the largest mirror humankind has ever constructed, and we’re on the cusp of installing such generative AI technology everywhere.

But we still haven’t really addressed Weizenbaum’s concerns, which grow more relevant with each new release. If a simple academic program from the ’60s could affect people so strongly, how will our escalating relationship with artificial intelligences operated for profit change us? There’s great money to be made in engineering AI that does more than just respond to our questions, but plays an active role in bending our behaviors toward greater predictability. These are two-way mirrors. The risk, as Weizenbaum saw, is that without wisdom and deliberation, we might lose ourselves in our own distorted reflection.

ELIZA showed us just enough of ourselves to be cathartic

Weizenbaum did not believe that any machine could ever actually mimic — let alone understand — human conversation. “There are aspects to human life that a computer cannot understand — cannot,” Weizenbaum told the New York Times in 1977. “It’s necessary to be a human being. Love and loneliness have to do with the deepest consequences of our biological constitution. That kind of understanding is in principle impossible for the computer.”

That’s why the idea of modeling ELIZA after a Rogerian psychotherapist was so appealing — the program could simply carry on a conversation by asking questions that didn’t require a deep pool of contextual knowledge, or a familiarity with love and loneliness.

Named after the American psychologist Carl Rogers, Rogerian (or “person-centered”) psychotherapy was built around listening and restating what a client says, rather than offering interpretations or advice. “Maybe if I thought about it 10 minutes longer,” Weizenbaum wrote in 1984, “I would have come up with a bartender.”

To communicate with ELIZA, people would type into an electric typewriter that wired their text to the program, which was hosted on an MIT system. ELIZA would scan what it received for keywords that it could flip back around into a question. For example, if your text contained the word “mother,” ELIZA might respond, “How do you feel about your mother?” If it found no keywords, it would default to a simple prompt, like “tell me more,” until it received a keyword that it could build a question around.
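
As a rough illustration of that loop, a toy version fits in a few lines of Python. This is not Weizenbaum’s actual code, which used ranked keywords and reassembly templates; the keyword table below is invented for the example.

```python
# Toy sketch of ELIZA's keyword-flipping behavior described above.
# The real program used ranked keywords and reassembly templates; this
# version just looks for a keyword, returns a canned question for it,
# and otherwise falls back to a rotating generic prompt.
KEYWORD_RESPONSES = {
    "mother": "How do you feel about your mother?",
    "father": "How do you feel about your father?",
    "angry": "Why do you feel angry?",
    "friend": "Tell me more about your friend.",
}
DEFAULT_PROMPTS = ["Tell me more.", "Please go on.", "I see."]

def eliza_reply(text: str, turn: int) -> str:
    words = [w.strip(".,!?") for w in text.lower().split()]
    for keyword, question in KEYWORD_RESPONSES.items():
        if keyword in words:
            return question
    # No keyword found: cycle through the generic fallback prompts.
    return DEFAULT_PROMPTS[turn % len(DEFAULT_PROMPTS)]

if __name__ == "__main__":
    for turn in range(3):  # a three-turn demo conversation
        print(eliza_reply(input("> "), turn))
```

With this little table, typing “a conversation with my friend left me angry” comes back as “Why do you feel angry?”, echoing the example above.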

Weizenbaum intended ELIZA to show how shallow computerized understanding of human language was. But users immediately formed close relationships with the chatbot, stealing away for hours at a time to share intimate conversations. Weizenbaum was particularly unnerved when his own secretary, upon first interacting with the program she had watched him build from the beginning, asked him to leave the room so she could carry on privately with ELIZA.

Shortly after Weizenbaum published a description of how ELIZA worked, “the program became nationally known and even, in certain circles, a national plaything,” he reflected in his 1976 book, Computer Power and Human Reason.

To his dismay, the potential to automate the time-consuming process of therapy excited psychiatrists. People so reliably developed emotional and anthropomorphic attachments to the program that it came to be known as the ELIZA effect. The public received Weizenbaum’s intent exactly backward, taking his demonstration of the superficiality of human-machine conversation as proof of its depth.

Weizenbaum thought that publishing his explanation of ELIZA’s inner functioning would dispel the mystery. “Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away,” he wrote. Yet people seemed more interested in carrying on their conversations than interrogating how the program worked.

If Weizenbaum’s cautions settled around one idea, it was restraint. “Since we do not now have any ways of making computers wise,” he wrote, “we ought not now to give computers tasks that demand wisdom.”

Sydney showed us more of ourselves than we’re comfortable with

If ELIZA was so superficial, why was it so relatable? Since its responses were built from the user’s immediate text input, talking with ELIZA was basically a conversation with yourself — something most of us do all day in our heads. Yet here was a conversational partner without any personality of its own, content to keep listening until prompted to offer another simple question. That people found comfort and catharsis in these opportunities to share their feelings isn’t all that strange.

But this is where Bing — and all large language models (LLMs) like it — diverges. Talking with today’s generation of chatbots is speaking not just with yourself, but with huge agglomerations of digitized speech. And with each interaction, the corpus of available training data grows.

LLMs are like card counters at a poker table. They analyze all the words that have come before and use that knowledge to estimate the probability of what word will most likely come next. Since Bing is a search engine, it still begins with a prompt from the user. Then it builds responses one word at a time, each time updating its estimate of the most probable next word.
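
To make the card-counting analogy concrete, here is a toy sketch of that loop. A real LLM estimates next-word probabilities with a neural network trained on billions of words; this sketch fakes the probability table with bigram counts over a tiny corpus, but the generate-one-word-at-a-time shape is the same.

```python
# Toy next-word predictor: count which word follows which in a tiny
# corpus, then generate text by repeatedly picking the most probable
# next word (greedy decoding). Real LLMs do this with a neural network
# over a vast vocabulary, but the loop has the same shape.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build a bigram table: for each word, count its possible successors.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt: str, length: int = 5) -> str:
    words = prompt.split()
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break  # dead end: this word never had a successor
        # Greedy decoding: always take the single most probable next word.
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the cat"))  # -> "the cat sat on the cat sat"
```

Swap the greedy most_common pick for sampling in proportion to the counts and you get the variability that makes real chatbots feel less mechanical.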

Once we see chatbots as big prediction engines working off online data — rather than intelligent machines with their own ideas — things get less spooky. It gets easier to explain why Sydney threatened users who were too nosy, tried to dissolve a marriage, or imagined a darker side of itself. These are all things we humans do. In Sydney, we saw our online selves predicted back at us.

But what is still spooky is that these reflections now go both ways.

From influencing our online behaviors to curating the information we consume, interacting with large AI programs is already changing us. They no longer passively wait for our input. Instead, AI is now proactively shaping significant parts of our lives, from workplaces to courtrooms. With chatbots in particular, we use them to help us think and give shape to our thoughts. This can be beneficial, like automating personalized cover letters (especially for applicants for whom English is a second or third language). But it can also narrow the diversity and creativity that arises from the human effort to give voice to experience. By definition, LLMs suggest predictable language. Lean on them too heavily, and that algorithm of predictability becomes our own.

Next Week, AI Will Tell On Itself About How It Will Dominate The World!! The Blog Will Be Written By AI, About AI.

Stay Tuned!!

Have A Great Week From All Of Us At Zoha Islands / Fruit Islands