During the chaotic years of the Trump Administration, the United States experienced a rise in hate crimes, an increase confirmed by FBI data collection, media reporting, and independent scholarship. According to Dr. Frank Pezzella, an Associate Professor of Criminal Justice at John Jay College and a scholar of hate crimes, hate crime offending in this country rose in four of the five years from 2015 to 2019, a run of consecutive increases that he says is new. Nine of the ten largest American cities, including New York City, saw some of the most dramatic increases.
Hate crimes, or bias crimes, are strictly defined by the FBI. The organization sets out 14 indicators providing objective evidence that a criminal offense was motivated by bias; one or more must be present for the offense to be classified as a hate or bias crime. But according to Dr. Pezzella, the evidence to meet those criteria isn’t always clear. Not every hate crime is as flagrant as the Pulse nightclub shooting in 2016 or the 2018 attack on Pittsburgh’s Tree of Life Synagogue. To establish that a hate crime was committed, first responding police officers must look for evidence of bias motivation – what Pezzella calls an “elevated mens rea” requirement. And a bias crime can only be charged when it targets a legally protected category, like race and ethnicity, sexual orientation or gender identity, disability, or religion, and these categories vary from state to state. Finally, the additional paperwork and procedural requirements that come with classifying an incident as a hate crime, in his words, disincentivize police reporting.
Undercounting Hate Crimes
The result of these complications is rampant underreporting. In his new book, The Measurement of Hate Crimes in America, Dr. Pezzella looks at the reasons why hate crimes are so undercounted in the United States, and proposes some solutions for what law enforcement and policymakers can do to correct the issue. Since the enactment of the federal Hate Crimes Statistics Act in 1990, which required the Attorney General to collect data about hate crimes, the FBI has been fulfilling this mandate in the form of the Hate Crime Statistics Program, published annually as part of the Uniform Crime Report. According to Dr. Pezzella, since 1990 the UCR has reported an average of roughly 8,000 hate crimes per year; but victims, he says, experience around 250,000 hate crimes per year. He attributes this substantial gap to a variety of factors, including the evidentiary and procedural barriers noted above. In addition, only about 100,000 of these victimizations are ever reported to the police in the first place. And when victims do report, police departments are under no legal requirement to pass their findings on to the FBI.
“Of the roughly 18,500 police departments, only maybe 75% participate in the Uniform Crime Report hate crime reporting program – note that it is voluntary,” says Pezzella. “So we don’t even know about hate crimes in 25% of precincts. And of the participating 75%, roughly 90% report zero hate crimes every year. So one of the reasons we wrote the book is that, either we don’t have hate crimes the way we think we do, or we have a systemic reporting problem.” It’s obvious which he believes is true.
The consequences of underreporting hate crimes are severe, Dr. Pezzella says. “To the extent that we underreport both the type and extent of victimization, it really does put a specific policy issue in front of us. We need to know who’s being affected, how they’re being affected, and the extent of the effect, in order to fashion remedies.” The only way to target treatment and services for the most vulnerable and likely victims is through accurate reporting.
In order to remedy undercounting and better target policy, Dr. Pezzella presents a number of recommendations in The Measurement of Hate Crimes in America. He calls for changes to take place within police departments, at the level of state and local politics, and in the criminal legal system. First, he suggests that every precinct have a written and clearly posted hate crime policy, and that every officer be trained to understand the rules for identifying bias crimes and the statutes governing them in their particular state. He would also like to see greater police-community engagement on this issue, with better tracking of non-criminal bias incidents – like seeing a swastika or other racist tag in the neighborhood – which Pezzella says often lead to violent bias crimes. He would especially like to see hate crime reporting made mandatory, with penalties or audits following a departmental report of zero bias crimes in a year.
Stepping out of police departments, Dr. Pezzella also calls for greater engagement from state and local politicians, who after all control the purse strings as well as set state legislation, but who are often hesitant to call attention to a problem with hate crimes in their district. Finally, he wants prosecutors’ offices to commit to seeking hate crime convictions, rather than settling for the easier task of convicting an offender for non-bias equivalents. With every actor across the board invested in tackling hate crimes and being transparent and proactive about applying best practices, offenders are put on notice that the community, including police, won’t allow these harmful crimes to continue.
Dr. Pezzella has been studying hate crimes since his graduate school years at SUNY-Albany, but he doesn’t feel he’s reached the end of this line of research. Going forward, he is interested in studying the deleterious and vicarious effects hate crimes can have on the victims’ communities. Because bias-motivated offenders target victims based on what they are rather than what they do, Dr. Pezzella says, there is a sense that anyone could become the next victim. This impersonal threat undermines societal ideals of trust and equality, and can even affect property values, as whole groups feel unsafe in certain areas and may be forced to relocate. Pezzella also mentions the psychological and emotional impacts of feeling under threat for simply being who and what you are. “When a victim goes home and says they were a victim of a hate crime, in what way does it impact the quality of life or sense of safety for secondary victims [i.e., the victim’s community]?” he asks. “What do they do? While we understand the direct impact, we know less about this vicarious impact, and how far it extends beyond the primary victim.”
He also has his eye on current events, especially the rise of domestic terrorism in the United States. Dr. Pezzella is concerned about the growing number of organized hate groups in recent years, and how emboldened they have been by rhetoric from the top levels of government. While many mass shootings have been categorized as domestic terrorism, Pezzella also sees evidence of bias that might categorize these events as hate crimes. If they are being left out of crucial counts that help to allocate resources and fight back against hate in this country, he wants to know.
Dr. Frank Pezzella is an Associate Professor of Criminal Justice at John Jay College. His primary research focus is on the causes, correlates, and consequences of hate crimes victimizations. He also conducts research on issues that relate to race, crime and justice. In addition to his most recent book, he is also the author of Hate Crime Statutes: A Public Policy and Law Enforcement Dilemma, as well as numerous peer-reviewed articles.
The outbreak of COVID-19 has accelerated a number of existing trends in the United States; along with giving a big boost to remote work and the digital economy, and reinforcing existing socioeconomic inequality, 2020 has also seen the trend of movement from big cities to smaller ones pick up. Whether because larger cities are too expensive or because COVID-19 made them feel not just dense but claustrophobic, residents have reconsidered their environments. While big cities like New York and San Francisco have seen their populations decline over the last five years, some smaller cities – with populations in the tens of thousands rather than the millions – have been seeing an upswing.
Dr. Richard Ocejo, a John Jay professor, sociologist and author, is interested in what it looks like when newcomers arrive in small cities. He’s using Newburgh, New York, a city of about 30,000 in the middle of the Hudson Valley, as a case study, spending time with new and old residents to learn what gentrification looks like in a smaller city. “Newburgh was totally abandoned,” says Ocejo. “Capital had left it, investment had left it, it was just a place to warehouse the poor and struggling. Until New York City became too expensive, then all of a sudden, small, affordable, historic places like Newburgh become valuable again, to a group of people who are looking for these urban lifestyles.”
Gentrifying a Small City
Ocejo sees the characteristics of small-city gentrifiers as distinct from those who have traditionally moved into gentrifying neighborhoods in New York City, like on the Lower East Side or in Brooklyn. People moving to small cities from places like New York are often middle class, mid-career professionals, who are looking to buy property more affordably while still maintaining the lifestyles and habits they developed in the big city. Over the course of several years of field work and interviews, Ocejo has pinpointed some common threads in the narratives Newburgh’s newest residents use to understand their actions.
“They recognize that the reason they left [New York City] was because of being priced out. But when they get to Newburgh, the understanding of what it’s like to not be able to afford a place any more, of having to leave one’s home as a consequence of these larger forces beyond your control, doesn’t resonate in how they understand gentrification as they are perpetuating it in this small city,” says Ocejo. “They don’t see what they’re doing there as gentrifying that will cause this sort of harm that could make somebody leave their home as they had to do. Instead they say, we’ll just do it better.”
Generally, Newburgh’s gentrifiers are opposed to harmful development by “slumlords” or “bad actors”; in contrast, they perceive themselves as providing employment and adding to the tax base. But Ocejo hasn’t seen concrete evidence to back up their narratives. “I don’t know many examples of what we can call a successful gentrification, at least not at any kind of scale,” says Ocejo. “I can’t think of any examples of an equitable integration where there aren’t tensions and conflicts that take place.”
Reckoning with Racism
Ocejo says that some of the challenges he sees playing out in Newburgh are tied into structural racism and the failure of newcomers to acknowledge that they are recreating harmful racial and economic dynamics in Newburgh that caused displacement in New York City. While he observed Newburgh’s newcomers participating in Black Lives Matter protests and marches, he says that the leap to understanding the racist structures that are tied up in gentrification is rarely made. “We don’t talk about gentrification as a racial process,” Ocejo says, “but it is. It’s this extraction of value from racialized spaces, non-white spaces that are taken advantage of through these processes. And that’s not discussed at all.” He says gentrifiers’ inability or unwillingness to confront these issues is exacerbating a key inequality at the heart of the process.
Ocejo does clarify that, although on the whole gentrified spaces tend to end up segregated socially and culturally, there are positives associated with the process. Smaller cities are crying out for even a fraction of the investment New York City has received and, done correctly, municipal revitalization can make a real difference to disadvantaged communities. And in interviews with existing Newburgh residents, he has generally heard people react positively to commercial development in their neighborhoods. However, they aren’t necessarily convinced that the development will add up to much real change in their own lives.
“Gentrification is a consequence of much larger forces that are beyond anybody’s control,” says Ocejo. Newburgh’s population of gentrifiers are responding to market forces that are making New York City a difficult place to live long-term without making significant sacrifices or acquiring millions of dollars. But at the end of the day, some groups have the means to make choices about where they will live and whether they will stay or go, while others are unable to make the same choices. It will take structural, policy-based change to make gentrifying urban neighborhoods, and migration in general, more equitable.
Dr. Ocejo has published three papers related to his work in Newburgh and has two additional papers under review. He is also working on the manuscript of a book that will bring together all of his work on this project; he expects it to come out in 2022 or 2023.
Dr. Richard Ocejo is an Associate Professor of Sociology at John Jay College, and the Director of the MA Program in International Migration Studies at the Graduate Center of CUNY. His research, which has been published in a variety of journals including Journal of Urban Affairs, Sociological Perspectives, and more, focuses on cities, culture and work. He uses primarily qualitative methods in his scholarship. Dr. Ocejo is the author of two books: Masters of Craft: Old Jobs in the New Urban Economy (2017) – on the transformation of manual labor occupations like butchering and bartending into elite occupations – and Upscaling Downtown: From Bowery Saloons to Cocktail Bars in New York City (2014) – about the influence of commercial operations on gentrification and community institutions in downtown Manhattan.
Have you ever been faced with a photo grid and asked to click on every traffic light to prove you weren’t a robot before you could access your email or bank? A recent proposal by Dr. Muath Obaidat, an Assistant Professor in John Jay College’s Department of Mathematics and Computer Science, could prevent you from having to go through that ever again.
Along with co-authors including his student Joseph Brown (a 2020 graduate who earlier this year was awarded John Jay’s Ruth S. Lefkowitz Mathematics Prize), Dr. Obaidat makes the case for a new way of authenticating user information that would make logging into websites more secure without overcomplicating the system. He calls it “a step forward” both technically and logistically, as the proposed authentication system is both technically more secure and easier to deploy commercially than previous proposals. So while Obaidat’s research may seem complicated, the solution he proposes in “A Hybrid Dynamic Encryption Scheme for Multi-Factor Verification: A Novel Paradigm for Remote Authentication” (Sensors, July 2020) is not just theoretical.
(To read the full text of the article for free, visit https://www.mdpi.com/1424-8220/20/15/4212/htm)
Read on for a Q&A with Dr. Muath Obaidat:
Can you describe the most common risks of the typical username/password authentication model most of us are using today?
The most common risk in current authentication models is the lack of presentation of actual proof of identity, especially during communications. Since the majority of websites use static usernames and passwords that do not change between sessions, if an attacker can get ahold of a login — whether by guessing or through more technical means — there is no further mechanism or nuance in design to actually stop them from using stolen data to imitate a user. While 2FA (Two Factor Authentication) has risen in popularity as a mitigation for this problem, both published papers from the National Institute of Standards and Technology (NIST) as well as high profile public hacks have shown this to be insufficient by itself, because of attacks which focus on manipulating or stealing data rather than simply brute-forcing (working through all possible combinations through trial-and-error to crack a passcode).
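The replay weakness Dr. Obaidat describes can be seen in a deliberately simplified sketch (this is illustrative only, not how any particular site implements login): because the stored credential never changes between sessions, a stolen password authenticates exactly as well as the legitimate one.

```python
import hashlib

# Toy user store: a static username mapped to a static password hash.
# (Illustrative only; real systems should use salted, slow hashes.)
USERS = {"alice": hashlib.sha256(b"hunter2").hexdigest()}

def login(username: str, password: str) -> bool:
    """Static-credential check: nothing about it varies per session."""
    stored = USERS.get(username)
    if stored is None:
        return False
    return hashlib.sha256(password.encode()).hexdigest() == stored

# The legitimate user logs in...
assert login("alice", "hunter2")
# ...and an attacker replaying the same stolen credential succeeds
# identically, because no per-session mechanism distinguishes them.
assert login("alice", "hunter2")
```

The point is that the server has no way to tell a replay from the real user, which is the gap dynamic, session-varying schemes aim to close.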
Can you explain how your proposed method works to authenticate a session?
The simplest way to explain how this form of authentication works is to imagine you had a key split into two halves; the client has a half, and the server has a half. But instead of just sending the half of the key you have, you’re sending the blueprint for said key half, which can only be reconstructed given the other half. This blueprint changes slightly each time you log in, but is still derivative of the other “whole” key.
Only two people have the respective halves: the client and the server. These halves are derivative of data which is itself derived from an original input. Thus, as long as you can produce something from the front-end that creates one input, even though this input is never sent, it can be integrated with this system. Think of that as the “mold” from which the key is derived, and then the blueprint is shifted on both ends according to the original mold.
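The key-and-blueprint analogy can be sketched with a standard challenge-response construction. This is an assumption-laden simplification, not the hybrid scheme from the paper: here a single shared secret stands in for the split key halves, and an HMAC over a fresh server nonce stands in for the per-session “blueprint” that changes on every login but derives from the same underlying secret.

```python
import hmac
import hashlib
import secrets

# Shared at enrollment; never transmitted again after that.
SECRET = secrets.token_bytes(32)

def client_proof(secret: bytes, session_nonce: bytes) -> bytes:
    """The 'blueprint': derived from the secret, different every session."""
    return hmac.new(secret, session_nonce, hashlib.sha256).digest()

def server_check(secret: bytes, session_nonce: bytes, proof: bytes) -> bool:
    """Server recomputes the expected proof and compares in constant time."""
    expected = hmac.new(secret, session_nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

# One login session: the server issues a fresh challenge.
nonce = secrets.token_bytes(16)
proof = client_proof(SECRET, nonce)
assert server_check(SECRET, nonce, proof)

# A captured proof is useless against the next session's nonce,
# so stealing transmitted data no longer lets an attacker imitate a user.
next_nonce = secrets.token_bytes(16)
assert not server_check(SECRET, next_nonce, proof)
```

The design point this illustrates is the one Dr. Obaidat emphasizes: the long-term secret is never sent, only a per-session derivative of it, so intercepting a transmission yields nothing reusable.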
How does your proposed scheme differ from others that have been in use or proposed previously?
What sets it apart is both the flexibility of the design as well as the range of problems it attempts to fix at one time. Many other schemes we studied were focused on fixing one problem: typically [they focused on] brute-forcing, which manifested in the form of padding “front-end” or “back-end” parts of a scheme without giving much thought to the actual transmission of data itself. Our scheme, on the other hand, is focused on protecting that transmitted data, while also being sure not to introduce additional weaknesses on either end of the communication.
Another big issue we often ran into with other schemes is design flexibility; many were either unrealistic to implement en masse, or were so specific that they pigeonholed themselves into a scenario where they could not be combined with other communication systems or improvements to other architectural traits. Our scheme is flexible in terms of architectural integration — for example, it uses the same simple Client-Server framework without introducing third parties or other nodes — and the overall design is both simplistic in terms of implementation and highly adaptable.
What is it that has prevented many newly-proposed authentication schemes from being implemented more broadly?
While it depends on the scheme in question, there are typically three factors that prevent implementation: user accessibility, deployment complications, and degree of benefit. The first isn’t really technical, but relates more to consumer factors. Many schemes simply are not widely implementable on a consumer level, not only because of aspects such as speed, but also because of logistics. Having a user go through a complicated process each time they want to log into a website isn’t very practical, especially if you’re selling a product where convenience is a factor, which is why some schemes don’t catch on despite being technically sound.
Deployment complications, on the other hand, relate to things such as how to replace current infrastructure with new infrastructure; many schemes require significant changes to existing architectures, or are so specific and complex that they are difficult to actually deploy. These complications act as a deterrent to those who may want to implement them. Lastly, degree of benefit is a big factor too. Given how ubiquitous current paradigms are, simply improving one aspect in exchange for the implementation of a widely different system is a very big ask. Implementation takes time, as does adoption on a wide scale, so unless the benefit is [significant enough to merit departing from] current paradigms, it’s unlikely many would want to explore “unproven” adoptions.
How would a new authentication method go from being theoretical to being widely adopted? In other words, by what process is this type of new technology adopted, and who is responsible for its uptake?
That’s a good question, and unfortunately I do not think there is a singular answer. Especially because of the decentralization of the internet, it’s hard to give a specific answer on what this would look like in practice. As the internet has become more consolidated under specific companies, I suppose one answer would be that bigger companies would have to take an interest in implementation and take action themselves to create a ripple effect. This is distinct from the past, when collective normalization of technology was bottom-up because standards were more decentralized.
Dr. Muath Obaidat is an Assistant Professor of Computer Science and Information Security at John Jay College of Criminal Justice of the City University of New York and a member of the Center for Cybercrime Studies. He also serves on the graduate faculty of the Master of Science in Digital Forensics and Cyber Security program and the doctoral faculty of the Computer Science Department at the Graduate School and University Center of CUNY.
He has published numerous scientific articles in journals and respected conference proceedings. His research interests lie in digital forensics and the security and privacy of the ubiquitous Internet of Things (IoT). His recent research cuts across wireless network protocols, cloud computing, and security.
There is no question that the fashion industry causes great harm to the environment. The industry’s faddish nature, combined with the overproduction of low-cost, low-quality pieces, is designed to encourage overconsumption. Production of fast fashion garments eats up precious resources, like clean water and old-growth forests, and discarded clothing can sit in landfills for hundreds of years, thanks to synthetic materials used in construction.
According to scholars Monique Sosnowski—a Ph.D. candidate in criminal justice at the CUNY Graduate Center—and John Jay Assistant Professor of Criminal Justice Dr. Gohar Petrossian, pollution is not the fashion industry’s only crime. In a new article, they investigated which species are being used by the fashion industry, worth over $100 billion globally, in order to better understand the damage the industry causes to wildlife and wild places.
Sosnowski and Petrossian looked at items imported by the luxury fashion industry and seized at U.S. borders by regulatory agencies between 2003 and 2013. Their study found that, during that decade, more than 5,600 items incorporating elements illegally derived from protected animal species were seized. The most common wildlife product was reptile skin—from monitor lizards, pythons, and alligators, for the most part—and 58% of confiscated items came from wild-caught species. The authors also found that around 75% of seizures were of products coming from just six countries: Italy, France, Switzerland, Singapore, China and Hong Kong. The heavy involvement of the European countries was unexpected, according to Dr. Petrossian, because they are key players in fashion design and production but “don’t generally come up in broader discussions on wildlife trafficking.”
THE SCIENCE OF WILDLIFE CRIME
The paper applied “crime science, a body of criminological theories that focus on the crime event rather than ‘criminal dispositions,’ to understand and explain crime. The overarching assumption is that crime is an opportunity, and it is highly concentrated in time, as well as across place, among offenders, and victims,” says Dr. Petrossian. Their scientific approach enabled the authors to analyze patterns and concentrations in wildlife crime, which Sosnowski notes is among the four most profitable illegal trades.
“We are currently living in an era that has been coined the ‘sixth mass extinction,’” she says. “It is crucial that we understand the impact that humans are having on wildlife, from habitat loss to the removal of species from global environments. Fashion is one of the major industries consuming wildlife products.”
A background in wildlife conservation, including unique experiences like responding to poaching incidents in Botswana and rehabilitating trafficked cheetahs in Namibia, led Monique Sosnowski to a Ph.D. in criminology; she wanted to move beyond a more traditional conservation-informed approach to address what she’d seen in the field. Working with Dr. Petrossian on a series of studies applying crime science to wildlife crimes has given her a broader view of the effects of wildlife-related crime on global ecosystems.
CREATING SOLUTIONS, SAVING WILDLIFE
Why is it important to understand what species are most commonly used in luxury fashion products, and where they are coming from? A study like this one provides information about trends that policymakers can use to strengthen or focus enforcement and inform better understanding of the issues. Sosnowski calls this “the key to devising more effective prevention policies.”
Currently, global regulation of the trade in wildlife products, including leather, fur, and reptile skin from both protected and unprotected species, is the province of the Convention on International Trade in Endangered Species (CITES); this treaty aims to ensure that international trade in wild animals and plants does not threaten their survival. But the treaty is limited in scope.
“Given the prevalence of exotic leather and fur in fashion, we believe CITES and other regulatory bodies should enact policies on its use and sustainability in order to protect wild populations, the welfare of farmed and bred populations, and the sustainability of the fashion industry,” Sosnowski says.
Consumers also have a role to play. “We are all led to believe that products found on the shelves are legal, but as this study has demonstrated, that isn’t always the case. Consumers of these products are the ones who have the power to change the behaviors of a $100 billion industry. We need to ask questions about where our products were sourced, and respond accordingly.”
Summarized from EcoHealth, Luxury Fashion Wildlife Contraband in the USA, by Monique C. Sosnowski (John Jay College, City University of New York) and Gohar A. Petrossian (John Jay College, City University of New York). Copyright 2020 EcoHealth Alliance.