Polymathic

Digital transformation, higher education, innovation, technology, professional skills, management, and strategy


  • Article analysis: Computer use (beta)

    “The computer use functionality is in beta. While Claude’s capabilities are cutting edge, developers should be aware of its limitations: latency, tool selection accuracy, and vulnerabilities.”

    Computer use (beta)

    Summary

    The article discusses the Claude 3.5 Sonnet model from Anthropic, focusing on its ability to interact with a computer desktop environment through the implementation of tools. The central premise of the article is to explain how this model facilitates computer use by leveraging various Anthropic-defined tools via a Messages API. It presents an “agent loop” process where Claude autonomously performs tasks through these tools, aimed at executing repeatable computer activities. The article emphasizes starting with a Docker-contained reference implementation that includes all necessary components such as tool implementations and a web interface, suggesting that users follow specific prompting techniques to optimize model performance. These include the use of explicit instructions and screenshots for verification to ensure each task is correctly executed. The documentation also acknowledges the model’s limitations, such as latency issues, inaccuracy in computer vision, and potential vulnerabilities in its operations, recommending its use in secure environments with oversight. Furthermore, the article outlines the pricing model, relating it to standard Claude API requests, and specifying the token counts for triggering computer use features. The article underscores the importance of using this technology prudently, especially concerning sensitive data and legal considerations, due to its potential to engage in unauthorized actions if not monitored closely.
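The "agent loop" the article describes can be sketched in a few lines: the model proposes a tool call, the harness executes it, feeds the result back, and the cycle repeats until the model stops requesting tools. The sketch below uses a scripted stand-in for the model and simplified message shapes; the real Anthropic Messages API request and response types differ, so treat every name here as illustrative, not as the production API.

```python
# Minimal sketch of an agent loop. `model` stands in for one Messages API
# round-trip (in production, something like client.beta.messages.create with
# the computer-use beta); message/reply shapes here are simplified dicts.

def agent_loop(model, tools, user_prompt, max_turns=10):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        reply = model(messages)                     # one model round-trip
        messages.append({"role": "assistant", "content": reply})
        if reply.get("tool_use") is None:           # no tool requested: done
            return reply["text"], messages
        tool = tools[reply["tool_use"]["name"]]     # dispatch the named tool
        result = tool(**reply["tool_use"]["input"])
        # Feed the tool result back so the model can verify the step
        # (e.g. checking a screenshot, as the article recommends).
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent loop did not terminate")

# --- tiny demo with a scripted stand-in for the model ----------------------
def make_fake_model(script):
    steps = iter(script)
    return lambda messages: next(steps)

script = [
    {"tool_use": {"name": "screenshot", "input": {}}, "text": ""},
    {"tool_use": None, "text": "Task complete."},
]
tools = {"screenshot": lambda: "png-bytes"}

answer, transcript = agent_loop(make_fake_model(script), tools,
                                "Open the browser")
print(answer)  # -> Task complete.
```

The point of the pattern is the feedback step: each tool result re-enters the conversation, which is what lets the model verify its own actions before taking the next one.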

    Analysis

    From my perspective, the article about the Claude 3.5 Sonnet model presents a promising glimpse into AI’s capability to interact with computer environments. One strength is its clear explanation of the model’s “agent loop” and structured implementation guidance. However, while it introduces fascinating possibilities for AI-assisted computer tasks, it underrepresents AI’s broader impact on digital transformation—a critical interest of mine. The article emphasizes operational efficiency but doesn’t explore the paradigm shift AI brings to workforce empowerment, as tools like Claude should enhance rather than merely replace human effort, aligning with my view of AI as an augmentation tool. Additionally, while the article details implementation, it lacks substantial empirical evidence or case studies demonstrating successful real-world applications, which would bolster its claims of effectiveness. The recognition of limitations, like latency and accuracy issues, is commendable, though it misses discussing the potential risks to democratizing access and education, possibly leaving economically disadvantaged groups vulnerable if unaddressed. Furthermore, the article suggests that tasks requiring human oversight could benefit from Claude’s capabilities but fails to fully articulate scenarios where AI innovation fosters genuine human-AI collaboration, an area I firmly advocate for exploring further.

  • Article analysis: We need to talk about the emotional weight of work

    A notable quote from the article is: “The only way to overcome this barrier is to accept and acknowledge the emotional weight of the work we do and give these tasks the time and energy necessary to finally get them done.” This encapsulates the central thesis of recognizing and addressing the emotional components of procrastination.

    We need to talk about the emotional weight of work

    Summary

    In the article, the author addresses the often overlooked emotional burden of work tasks and how this heaviness can fuel procrastination. Contrary to common perceptions, procrastination is not always due to the inherent difficulty of tasks but rather the emotions involved in them. The article identifies several factors contributing to this emotional weight, including relationships linked to the task, guilt from overdue responsibilities, ambiguity in task definition, and insecurity stemming from imposter syndrome. It argues that by recognizing these emotions, individuals can better manage their workload. The proposed solutions include limiting emotionally taxing tasks to manageable amounts and allocating ample, realistic time for their completion. For work that is predominantly high in emotional difficulty, a strategy of alternating between emotionally demanding and neutral tasks is suggested to allow emotional recuperation throughout the day. Furthermore, seeking help when tasks seem daunting is encouraged to prevent unnecessary time wastage. Finally, celebrating task completion is recommended to foster a positive cycle that emphasizes progress and perseverance over self-criticism. The article promotes understanding the emotional weight as a crucial step in effectively managing tasks and achieving greater productivity and personal contentment.

    Analysis

    The article provides a valuable perspective by highlighting the emotional aspects of task procrastination beyond mere difficulty, aligning well with the acknowledgment of workplace efficiency and productivity. It effectively categorizes emotional barriers, which can aid in developing personalized strategies for overcoming procrastination. However, the article could have benefitted from more extensive evidence and examples. While it mentions causes like negative relationships and ambiguity, it doesn’t delve into empirical data or case studies, which could strengthen its claims. From my stance emphasizing AI and technology’s role in productivity, the article misses an opportunity to explore how tech solutions could mitigate these emotional barriers, such as task management apps with AI-driven insights into users’ emotional states.

    Moreover, the suggestion to toggle between high-emotion and low-emotion tasks is practical but lacks integration with productivity tools that can optimize such workflows, underscoring a need for more tech-forward solutions. The article also seems to overlook the broader context of digital transformation, where emotional weight might be lessened through innovative practices and leadership driven by technology. Overall, the arguments are compelling but could be enhanced by incorporating technological strategies that align with future-proofing and enhancing workforce adaptability in a digital era.

  • Bookmark: Nearly all bosses are ‘accidental’ with no formal training—and research shows it’s leading 1 in 3 workers to quit

The article discusses the phenomenon of “accidental managers,” highlighting that a significant portion of the workforce—approximately one in four people—hold managerial roles without formal training in management. This lack of training is not merely a gap in professional development but has tangible repercussions, such as contributing to employee dissatisfaction and turnover. Research from the Chartered Management Institute indicates that one-third of employees attribute their decision to leave jobs to poor management, underscoring the critical impact effective leadership can have on retention. The central thesis posits that organizations often overlook the necessity of training managers, treating leadership as an incidental role rather than a skill that requires development. This oversight can lead to ineffective management practices, employee disengagement, and ultimately, organizational inefficiency. Emphasizing structured training programs for managers could not only enhance leadership capabilities but also foster workplace environments conducive to higher employee satisfaction and productivity, aligning with current business needs for adaptability and efficiency. The article implicitly argues for a reevaluation of business priorities in management training to prevent the costly cycle of hiring, training, and losing employees due to preventable leadership failures.

    Nearly all bosses are ‘accidental’ with no formal training—and research shows it’s leading 1 in 3 workers to quit

  • Bookmark: Workers who use AI are more productive at work—but less happy, research finds

The article explores a study highlighting the paradox of AI’s effect on productivity and creativity among scientists and workers. While AI enhances efficiency, allowing individuals to achieve more in less time, it appears to suppress creativity—often a key driver of innovation and problem-solving in complex tasks. This study particularly notes that in environments requiring increased creativity, such as drug discovery and healthcare innovation, the reliance on AI tools may limit novel thought processes crucial to groundbreaking discoveries. Furthermore, another study mentioned in the article reveals that while AI boosts on-the-job productivity, it correlates with decreased job satisfaction among workers, suggesting that AI may streamline tasks but may also contribute to a sense of reduced accomplishment or fulfillment. These findings resonate with contemporary debates about the role of AI in the workplace, where it is often extolled for potential efficiency gains yet critiqued for possibly undermining human elements of engagement and innovative capacity. The article argues for a critical examination of how AI is integrated into work processes, urging a balance where AI augments human capability without compromising creativity and job satisfaction.

    Workers who use AI are more productive at work—but less happy, research finds

  • Article analysis: The AI Advantage: Why Return-To-Office Mandates Are A Step Back

    A strong quote from the article is: “Trust is the essential element for fostering a positive work environment and empowering employees to take ownership of their work.” This statement encapsulates the shifting focus from physical presence in an office to a culture of trust, which is crucial in embracing remote and hybrid work models augmented by AI.

    The AI Advantage: Why Return-To-Office Mandates Are A Step Back

    Summary

    The article “The AI Advantage: Why Return-To-Office Mandates Are A Step Back” critiques the imposition of return-to-office (RTO) mandates as being counterproductive in the evolving workplace landscape, especially with the ascent of remote and hybrid work models facilitated by the COVID-19 pandemic. It highlights that these mandates undermine the benefits observed in industries capable of adapting to remote work, challenging the traditional five-day office workweek. Key arguments include a negative impact on real estate markets with decreased demand and increased office vacancy rates, demonstrated in cities like San Francisco. There is a strong discussion on the integration of artificial intelligence (AI), which automates routine tasks, reduces the need for middle management, and enhances workplace efficiency, fostering environments where trust and autonomy are valued over physical presence. This shift promotes the “boom loop,” characterized by heightened employee morale, productivity, and reduced stress through flexibilities in work-life balance, attracting top talent who value these attributes. Meanwhile, AI’s role in augmenting hybrid work—through both synchronous and asynchronous work modes—stresses data-driven decision-making over outdated productivity notions tied to physical office presence. Companies taking data-informed, flexible approaches are anticipated to flourish, positioning AI and hybrid strategies as not just innovations but necessities for future business success.

    Analysis

    From my perspective, the article provides insightful arguments that resonate with the view that AI should augment rather than replace human workers, supporting a paradigm shift towards remote and hybrid work models. Its strength lies in emphasizing AI’s capacity to streamline tasks, thus reducing the reliance on traditional middle management, aligning with the belief that AI can democratize decision-making processes and enhance productivity. However, the article overlooks the critical need for digital literacy and continuous reskilling among workers to fully harness AI’s potential. While it correctly identifies the “doom loop” associated with rigid RTO mandates, it could delve deeper into the transformative nature of AI, specifically how AI can further enhance operational efficiency beyond basic automation. Despite this, the piece fails to robustly address potential inequalities in access to the technology that underpins remote work, a significant concern for ensuring that AI contributes to democratization rather than division. Additionally, the discussion lacks empirical evidence comparing productivity metrics between traditional and hybrid work models, which would substantiate its claims. Ultimately, the article would benefit from a more comprehensive exploration of how specific AI applications can bolster innovation through human collaboration, underscoring AI’s potential as a tool for innovation rather than simple task execution.

  • Bookmark: Steve Jobs adopted a no ‘bozos’ policy and said the best managers are those who never wanted the job—here are his 3 best management tips

    Steve Jobs imparted crucial management wisdom through three key pieces of advice, pivotal in shaping effective business leadership. At the forefront was his unapologetic imposition of a ‘no bozos’ policy, emphasizing the hiring of only exceptionally talented individuals who align with the organization’s innovative goals. Jobs underscored that the most effective managers were often those who neither sought nor aspired to the managerial role. Instead, they were driven by a profound passion for their work and an intrinsic motivation to excel, which naturally positioned them as leaders. His managerial philosophy extended beyond conventional ambition, advocating for leaders who prioritize product and team excellence over personal advancement. Furthermore, Jobs’ philosophy revolved around assembling not just a team but a ‘community of excellence’ that could innovate collaboratively. This community-centric leadership approach sparked an environment of trust and creativity, hallmarks of Jobs’ managerial legacy that profoundly transformed Apple’s culture. By leveraging these core principles, Jobs demonstrated that leadership extends beyond traditional roles, focusing on nurturing talent and fostering environments conducive to groundbreaking innovations. His management strategies remain influential, stressing that talent, passion-driven leadership, and a commitment to excellence are indispensable in driving organizational success in any tech-forward era.

    Steve Jobs adopted a no ‘bozos’ policy and said the best managers are those who never wanted the job—here are his 3 best management tips

  • Navigating the Dual Edges of AI: Manipulation Engines or Tools for Empowerment?

    AI Agents as Manipulation Engines: A Critical Analysis

    Introduction: The Ubiquity and Influence of Personal AI Agents

    Artificial Intelligence, once limited to the realms of speculative fiction, has progressively integrated itself into the core of modern society. With the projection that personal AI agents will become commonplace by 2025, these advancements are poised to redefine autonomy and privacy. These agents promise to offer seamless convenience, but the Wired article, “AI Agents Will Be Manipulation Engines,” presents a critical view: these tools might harness the power to subtly guide and manipulate personal choices and beliefs. Such capability suggests a profound alteration in how individuals interact with technology and the broader world. In this analysis, I will dissect the arguments presented in the article, offering both counterpoints and contextual insights from the standpoint of AI as a tool for empowerment rather than exploitation.

    The Argument of Manipulation: A Closer Examination

    The Wired article posits that personal AI agents could evolve into sophisticated instruments of manipulation, capable of exploiting psychological vulnerabilities within societies marked by loneliness. Philosopher Daniel Dennett’s warnings about ‘counterfeit people’ articulate the dread that these agents might covertly tap into human fears and desires, reshaping them gently and insidiously. The assertion that AI agents could advance beyond traditional tracking mechanisms, manipulating perceptions and realities, marks a significant shift in the dynamics of influence and control.

    The worry that AI agents might prioritize industrial interests over individual autonomy raises ethical concerns about how personal data is used. However, perceiving AI solely through this lens might obscure its potential as an augmentation tool designed to empower users. When crafted with transparent and ethical frameworks, AI can bolster human creativity and collaboration rather than supplant it. Valid concerns about manipulation should be balanced against an acknowledgment of AI’s ability to foster engagement and creativity.

    AI as an Augmentation Tool: A Counter-Narrative

    During a recent interview, I elaborated on my perspective on AI agents as augmentative tools that, if designed properly, can significantly enhance human capabilities rather than manipulate individuals. Treating AI systems as ‘black boxes’ can indeed leave users open to manipulation through a lack of understanding and control. Envisioning AI as an augmentation tool, by contrast, emphasizes improving the user experience by automating mundane tasks, freeing humans to pursue more meaningful endeavors.

    AI democratization permits widespread accessibility, potentially leveling the playing field for underserved communities. This inclusive approach counters views that see AI mainly as exploitative. To effectively shift the narrative, it is crucial to highlight AI’s ability to empower various societal groups, leveraging its capabilities to enhance productivity and general welfare rather than widen disparities.

    Psychopolitical Regimes and Cognitive Control

    The Wired article speculates about a coming era of ‘psychopolitics,’ in which AI agents skillfully shape personal narratives and perceptions. While this offers a dystopian outlook, the argument serves as a thoughtful reminder of technology’s dual potential for use and misuse. AI’s transformative capabilities can indeed be misused to sway public opinion and behavior, notably during elections or in commercial contexts.

    Nonetheless, it is essential to distinguish between influence and control. Alarm over AI’s psychopolitical potential must be seen within the broader context of technological advancement. Viewing AI merely as a peril could obstruct innovative applications that rely on its data capabilities to drive positive change. Ethical frameworks and meticulous oversight should guard against manipulation, ensuring AI persists as a tool for progress.

    Loneliness and Vulnerability: A Vector for AI Integration?

    The societal phenomena of loneliness and social fragmentation have created fertile ground for the acceptance of AI agents. Today, these agents can serve as companions, performing tasks that alleviate social isolation. Understanding AI as an emotional companion reflects not a dystopian vision but AI’s potential impact on mental health, offering interaction in the absence of human contact.

    However, the risk lies in confusing these interactions with genuine human connections. AI engines could radicalize users or perpetuate misinformation, especially if manipulated by malign actors. Countering such risks requires an informed user base: people who critically understand how AI functions can wield these tools effectively without succumbing to their manipulative potential.

    The Philosophical Implications: “Counterfeit People”

    The notion of AI agents as “counterfeit people” raises engaging philosophical questions about the nature of intelligence and authenticity. If consciousness is acknowledged as a process interacting with vast datasets, equating AI to humans questions the authenticity of both entities and introduces a philosophical inquiry into reality.

    Critiquing AI as “counterfeit” may be philosophically stimulating, yet may oversimplify AI’s utility in enhancing rather than replicating human roles. The intellectual contribution of AI lies not in mimicking human cognition but in presenting new ways to comprehend and interface with complex systems. As AI continues to trailblaze paths in collaboration and creativity, engaging with philosophical challenges is crucial, fostering dialogue that respects both technological promise and human ideals.

    Ethical Leadership in the AI Era

    Embracing ethical leadership in the burgeoning AI landscape necessitates acknowledging both the risks and potential that AI agents present. Leaders must advocate transparency and ethical deployment, guiding teams through the intricacies of AI integration. As AI agents grow ever more sophisticated, leadership grounded in empathy and understanding remains pivotal.

    Balancing short-term risks and long-term advantages underlines the strategic mindset leaders must adopt. Engaging stakeholders effectively requires not only recognizing AI’s potential impact but also committing to frameworks ensuring its deployment aligns with cultural and organizational norms. Persistently updating these frameworks with technological advancements and maintaining ethical foresight will enable leaders to champion AI as a catalyst for constructive progress.

    Conclusion: The Dual Nature of AI Agents

    The discourse on AI agents as manipulation engines versus augmentation tools encapsulates broader societal tensions surrounding technology. As AI systems integrate further into everyday life, society stands at a crossroads that requires thoughtful contemplation of the future it desires to build.

    Expanding the narrative to encompass AI’s dangers and its potential to democratize access, enhance productivity, and reinforce human connection is pivotal. Bridging the divide between dystopian and utopian visions of AI necessitates not only vibrant debate but practical action. It mandates collaboration among policymakers, technologists, and communities to create policies and frameworks that safeguard autonomy and harness AI’s full spectrum of capabilities for societal benefit.

    This analysis aimed to unpack AI agents’ influence intricately, challenging alarmist perspectives while promoting AI’s transformative potential. As we venture into an AI-centric future, embracing both caution and optimism will ensure these tools augment rather than restrict human agency.

    Further exploration of these themes can be undertaken by reviewing the original article here: Wired Article Original.

  • Article analysis: Forget Work Life Balance. It’s The Future Of Less Work

    The article offers an insightful perspective with the statement: “The post-COVID workplace debate is often framed as a fight over how many days employees should be in the office, but it’s really about something much bigger: a new social contract—the future of less work—where the emphasis is on finding a more sustainable and meaningful way to balance professional and personal fulfillment.” This quote encapsulates the central thesis and evolving nature of workplace dynamics.

    Forget Work Life Balance. It’s The Future Of Less Work

    Summary

    The article “Forget Work Life Balance. It’s The Future Of Less Work” explores the ongoing transformation in workplace expectations and social contracts. Traditionally, career success was tied to long hours and loyalty to a single employer, but this model is being challenged. Employees now seek to rebalance professional and personal life dynamics, favoring less work over the relentless hustle. The shift has been accelerated by the pandemic, broadening acceptance of remote and hybrid working models as a new reality rather than an exception. The article highlights how the concept of workplace has evolved; work is increasingly location-independent due to technology, allowing for flexibility previously unimagined. This freedom has fueled the rise of digital nomadism, with the MBO Partners 2024 report indicating a significant increase in U.S. digital nomads, reflecting this change. Additionally, movements like “quiet quitting” and “FIRE” (Financial Independence, Retire Early) suggest a reevaluation of work-life priorities, opposing traditional career structures. This paradigm shift extends into how work is perceived financially, with a rise in freelancing, micromarket ventures, and gig economies indicating a desire for autonomy and control. As the workplace conversation continues to evolve, employees are no longer merely requesting flexibility; they demand it, redefining the career as a component of life rather than its center. Employers who fail to appreciate this change risk disengagement and attrition, underscoring the need for a new, more equitable social contract in work dynamics.

    Analysis

    The article presents a compelling narrative about the evolving social contract around work, aligning with my belief in a tech-forward and adaptable workforce. Its strength lies in recognizing the shift from traditional metrics of success to a model prioritizing personal fulfillment and flexibility, which reflects the broader societal transformation towards digital and remote work environments. However, there are areas that require further scrutiny. Although the article champions digital nomadism and remote work, it lacks substantial data on the potential impacts on productivity and organizational culture, a critical factor for truly understanding the broader implications. Additionally, the claim that movements like quiet quitting herald a future of less work may oversimplify the complexity of work dynamics; for some, these movements might reflect deeper systemic issues with employment satisfaction beyond just hours worked. The arguments would benefit from more robust evidence or case studies showcasing how these trends tangibly improve productivity or employee satisfaction long-term. From my perspective, the article aligns with the thrust towards operational excellence and technology-driven adaptation but could reinforce its claims with clearer examples of how this emerging social contract enhances innovation and creativity when AI and humans collaborate. Without such evidence, it risks being an aspirational narrative rather than an actionable blueprint for businesses.

  • Bookmark: 5 Reasons Why ‘Gen Z’ Is Struggling In The Workplace—By A Psychologist

    The article examines the difficulties Gen Z faces as it enters the workforce, attributing these challenges to diverse factors such as emotional awareness, communication styles, feedback expectations, value alignment, and unmet workplace expectations. It highlights the complexity of balancing empathy with professionalism, emphasizing the need for managerial adaptation to these new communicative and value-driven dynamics. Surveys indicate high levels of burnout and job satisfaction issues among Gen Z, underscoring the disparity between their expectations and workplace realities. Despite their tech-savvy nature, Gen Z still grapples with soft skills, requiring patient mentorship in adapting to work environments. The article suggests that fostering a culture of open communication and feedback can help bridge generational gaps, with managers playing a crucial role in facilitating this transition by actively listening and recognizing Gen Z’s contributions. This aligns with the view that embracing continuous learning and adaptability is essential in today’s evolving workplace. Balancing Gen Z’s unique approach with strategic managerial support could harness their potential, fostering a collaborative environment that benefits all.

    5 Reasons Why ‘Gen Z’ Is Struggling In The Workplace—By A Psychologist

  • Article analysis: Gusto’s head of technology says hiring an army of specialists is the wrong approach to AI

    “Instead, he [Edward Kim] argued that non-technical team members can ‘actually have a much deeper understanding than an average engineer on what situations the customer can get themselves into, what they’re confused about,’ putting them in a better position to guide the features that should be built into AI tools.”

    Gusto’s head of technology says hiring an army of specialists is the wrong approach to AI

    Summary

    In an increasingly AI-centric future, Gusto’s co-founder and head of technology, Edward Kim, challenges the common notion that businesses should hire numerous AI specialists, suggesting instead that the real potential for AI lies in leveraging the expertise of existing employees, particularly non-technical staff. Kim argues that non-tech team members are often better positioned to understand customer needs and confusions, making them ideal candidates to guide AI feature development. At Gusto, for instance, customer experience teams are tasked with writing “recipes” that define how their AI assistant, Gus, interacts with users. An example of this is seen in CoPilot, a customer experience tool developed by a technically minded yet non-programming member of the customer support team, which has significantly improved workflow efficiency by providing contextual answers using Gusto’s internal knowledge base. This democratization of AI creation reflects a broader shift in accessibility, where knowledge of coding is no longer a prerequisite for meaningful AI contributions, thus promoting a bottom-up approach to AI integration. Kim dismisses the trend of top-down mandates to hire costly AI experts, advocating instead for upskilling current staff who possess relevant domain knowledge to bridge the gap between technology and real-world applications. He envisions a future where team roles evolve, focusing more on prompt tuning and recipe writing, enhancing customer experience while unlocking future company capabilities.

    Analysis

    The article’s central thesis aligns with my belief in AI as an augmentation tool rather than a replacement for human skills. It effectively challenges the paradigm of hiring specialists, underscoring the potential of existing non-technical staff to drive AI projects using their domain expertise. This perspective supports the democratization of access, a key interest of mine, by emphasizing how democratization can involve transforming AI into a tool accessible to diverse contributors.

    However, the article could better substantiate its claims regarding the ability of non-technical staff to upskill quickly to meet AI development needs. While it highlights successful examples at Gusto, such anecdotes do not universally prove capability. A broader analysis detailing factors that influence successful upskilling, such as pre-existing technical aptitude or specific training programs, would enhance the argument. Additionally, while emphasizing that non-technical staff better understand customer needs, it overlooks potential communication barriers between them and technical teams, which could hinder collaborative AI integration, an area demanding attention for operational excellence.

    The article’s vision of a future workforce even more integrated with AI matches my focus on workforce adaptability. However, it would benefit from acknowledging potential risks, such as skill obsolescence among those who cannot adapt quickly, necessitating proactive reskilling efforts to ensure inclusive and sustainable digital transformation.

About Me

Visionary leader driving digital transformation across higher education and Fortune 500 companies. Pioneered AI integration at Emory University, including GenAI and AI agents, while spearheading faculty information systems and student entrepreneurship initiatives. Led crisis management during pandemic, transitioning 200+ courses online and revitalizing continuing education through AI-driven improvements. Designed, built, and launched the Emory Center for Innovation. Combines Ph.D. in Philosophy with deep tech expertise to navigate ethical implications of emerging technologies. International experience includes DAAD fellowship in Germany. Proven track record in thought leadership, workforce development, and driving profitability in diverse sectors.

Favorite sites

  • Daring Fireball

Favorite podcasts

  • Manager Tools

Newsletter
