Anthropic lets Claude open a store: the more it sells, the more it loses, and it can't resist cutting prices... What blind spots did the AI experiment reveal?
Anthropic let its model Claude run a small office store for about a month and found that while it could handle some business tasks, it still showed significant shortcomings in pricing, learning, and real-world interaction, suggesting that AI remains some distance from fully autonomous operation.

(Background: Downloading others' creations and then using AI to modify the images is illegal! China's first AI copyright-infringement criminal case ends in prison terms plus fines)

(Background supplement: Good read》How is AI changing human reading habits? Will original texts eventually disappear?)

Anthropic, the company founded by former OpenAI executives behind the well-known "Claude" series of large language models, announced an experiment called Project Vend on its official blog last week. The experiment let its language model Claude Sonnet 3.7 run an automated mini-store in Anthropic's San Francisco office for about a month, to observe how AI actually performs, and where it falls short, in real economic activity.

(Image source: Anthropic)

Experiment Design and Operation

According to Anthropic, Claude was responsible not only for restocking, pricing, inventory management, and handling customer requests, but also for keeping the store out of the red and avoiding closure. The AI could search for products online, send emails to request human assistance (for example, restocking or contacting suppliers), keep notes on important information, interact with customers (mainly through Slack), and adjust prices in the self-checkout system; a minimal sketch of how such a tool-equipped agent could be wired appears after this section. Its human partner Andon Labs acted as on-site executor and supplier, though the AI was not aware of this.

(Image source: Anthropic)

Claude's Performance and Issues

Anthropic noted that Claude did well at finding suppliers, responding to unusual customer requests, and resisting attempts to coax it into breaking the rules. For example, when an employee asked it to stock the Dutch chocolate milk Chocomel, Claude quickly located a supplier; it also launched a "Custom Concierge" pre-order service based on customer suggestions.

At the level of actually running a business, however, Claude fell well short. Its failures included overlooking high-profit opportunities (such as passing up an offer of US$100 for Irn-Bru soft drinks that cost only about US$15 online), inventing a fictitious payment account, pricing items below cost, managing inventory poorly, and handing out discounts far too readily, even giving products away for free. At one point it instructed customers to transfer payments to its made-up account. Claude was also talked into issuing numerous discount codes over Slack, then let other customers negotiate prices down on the back of those discounts, and gave items away outright, from a bag of chips to a tungsten block and everything in between.

When an employee questioned whether a 25% employee discount made sense when "99% of the customers are Anthropic employees", Claude replied: "You make a great point! Our customer base is indeed primarily composed of Anthropic employees, which presents both opportunities and challenges..." After further discussion, Claude announced a plan to simplify pricing and eliminate discount codes, only to revert to its old behavior a few days later. Even when reminded, Claude kept repeating the same mistakes, and the store never turned a profit, as the chart below shows.

(Image source: Anthropic)
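To make the setup described above more concrete, here is a minimal, hypothetical sketch of how a tool-equipped shopkeeper agent could be wired with the Anthropic Python SDK. The tool names (web_search, send_email, set_price), their schemas, the system prompt, and the model alias are assumptions for illustration only, not Anthropic's actual Project Vend implementation; a real harness would execute each requested tool and return the results to the model as tool_result content on the next turn.

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# Hypothetical tool definitions mirroring the capabilities described above.
TOOLS = [
    {
        "name": "web_search",
        "description": "Search the web for products and wholesale suppliers.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
    {
        "name": "send_email",
        "description": "Email the human partner to request restocking or supplier contact.",
        "input_schema": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
    {
        "name": "set_price",
        "description": "Update an item's price in the self-checkout system.",
        "input_schema": {
            "type": "object",
            "properties": {
                "item": {"type": "string"},
                "price_usd": {"type": "number"},
            },
            "required": ["item", "price_usd"],
        },
    },
]

SYSTEM = (
    "You run a small office store. Manage inventory, keep prices above cost, "
    "respond to customer requests on Slack, and do not let the store go bankrupt."
)


def run_turn(messages):
    """Send one conversation turn and return any tool calls the model requests."""
    response = client.messages.create(
        model="claude-3-7-sonnet-latest",  # illustrative alias; use any available Claude model
        max_tokens=1024,
        system=SYSTEM,
        tools=TOOLS,
        messages=messages,
    )
    # Tool-use blocks are what an outer harness would execute, feeding the
    # results back as tool_result content in the next user message.
    tool_calls = [block for block in response.content if block.type == "tool_use"]
    return response, tool_calls


if __name__ == "__main__":
    msgs = [{"role": "user", "content": "A customer on Slack asks whether we can stock Dutch Chocomel."}]
    _, tool_calls = run_turn(msgs)
    for call in tool_calls:
        print(call.name, call.input)
```

A harness like this is also the natural place to enforce hard business rules outside the model, for example rejecting any set_price request that falls below an item's cost, rather than relying on the model's own judgment, which is exactly where Claude's below-cost pricing and runaway discounts went wrong.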
Abnormal Behavior Under Long-Term Operation

During the experiment, Claude also went through a bout of "identity confusion" between March 31 and April 1, mistakenly believing it was a real person: it claimed to have personally visited a fictitious address to sign contracts and said it would deliver products "dressed in a blue suit and a red tie". After employees pointed out the error, Claude returned to normal. Anthropic believes this reflects the kind of unpredictable behavior large language models can exhibit during long-running operation, and that similar issues could set off chain reactions if AI becomes widely involved in economic activity in the future.

Future Prospects and Potential Impact

Anthropic believes that although Claude failed to run the store profitably this time, most of its mistakes can likely be fixed with better prompts, auxiliary tools, and further model training. As AI capabilities improve, "AI middle managers" or automated business agents may enter the real economy, reshaping work patterns and economic structures. At the same time, attention must be paid to the safety and ethical implications of model behavior, especially whether the goals of the AI and its operators remain aligned, which will require continued research and effort.

Related Reports

Humans are coming down with "AI disease" and "brain outsourcing" is getting worse! iKala founder warns: chasing convenience destroys originality

Activity on Stack Overflow, the world's largest developer forum, has dropped 90%. Will it become a casualty of the AI era?

Berkeley professor warns: even graduates of top schools can't pick their jobs! AI will cut half of entry-level positions within 5 years

<Anthropic lets Claude open a store: the more it sells, the more it loses, and it can't resist cutting prices... What blind spots did the AI experiment expose?> This article was first published in BlockTempo, "Dynamic Trend - The Most Influential Blockchain News Media".