torsdag 6 mars 2025

DDD dead man walking


Domain-Driven Design (DDD) was a popular approach for designing complex software systems around 15-20 years ago, especially when building monoliths or web services. However, there are very valid arguments against using DDD, especially if you work with a microservice architecture and particularly when working with Spring Boot. Here are some points to consider:

1. Overhead and Complexity

   Argument: DDD introduces additional layers of abstraction that can increase the complexity of your microservice.

Impact: For microservices, this overhead is not justified. Spring Boot is often used to build lightweight, fast-to-develop services, and DDD can slow down development by requiring rigorous modeling and design upfront. That is largely wasted work, because in a microservice the important things are the specific business domain the microservice has to own and the contracts it exposes: upstream towards the consumers of its REST API, and downstream towards other consumers if your service implements event sourcing or is an important component in your company's message-driven architecture.

A simpler CRUD-based approach might suffice for straightforward microservices, reducing development time and cognitive load; for a more central microservice you might implement Event Sourcing instead. The sketch below shows what the CRUD-based style can look like.
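
As a rough illustration of that lighter-weight style, here is a minimal sketch assuming Spring Web and Spring Data JPA on the classpath (jakarta.persistence is the Spring Boot 3 namespace; older versions use javax.persistence). The Customer entity, repository and endpoints are invented for the example; the whole service collapses into an entity, a repository and a thin controller, with no aggregates or application-service layers in between:

import java.util.List;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@Entity
class Customer {
  @Id
  @GeneratedValue
  Long id;
  String name;
}

interface CustomerRepository extends JpaRepository<Customer, Long> {
}

@RestController
@RequestMapping("/customers")
class CustomerController {

  private final CustomerRepository repository;

  CustomerController(CustomerRepository repository) {
    this.repository = repository;
  }

  // plain CRUD endpoints, no aggregates or domain services in between
  @GetMapping
  List<Customer> all() {
    return repository.findAll();
  }

  @GetMapping("/{id}")
  Customer one(@PathVariable Long id) {
    return repository.findById(id).orElseThrow();
  }

  @PostMapping
  Customer create(@RequestBody Customer customer) {
    return repository.save(customer);
  }
}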

2. Learning Curve

Argument: DDD requires a deep understanding of its concepts and of how to implement them effectively, and that is knowledge about how to design with the right semantics, not about how to code.

Impact: Teams unfamiliar with DDD may struggle to apply it correctly, leading to poorly designed systems or wasted time. Spring Boot developers might prefer more straightforward patterns that align with their existing knowledge.

3. Misalignment with Microservice Granularity

Argument: DDD emphasizes bounded contexts, which might not always align with the granularity of microservices.

Impact: The use of DDD can lead to microservices that are too fine-grained, increasing operational complexity without providing significant business value.

Focus on business capabilities rather than strict DDD boundaries when defining microservices.

4. Over-Engineering

Argument: DDD encourages rich domain models, which can lead to over-engineering.

Impact: Not all microservices require complex domain logic. For example, a service that primarily handles data storage or integration with third-party APIs might not benefit from a full DDD approach.

5. Performance Considerations

Argument: DDD's emphasis on aggregates and consistency boundaries can lead to inefficient data access patterns.

Impact: In a microservices architecture, fetching large aggregates or enforcing transactional consistency across services can hurt performance. Spring Boot's simplicity and focus on lightweight services might clash with DDD's heavier design.

6. Tooling and Frameworks

Argument: Spring Boot provides excellent support for building RESTful services and integrating with databases, but it doesn't enforce or align with DDD principles.

Impact: Developers might need to write additional boilerplate code or use third-party libraries to implement DDD patterns, which can slow down development.

Leverage Spring Boot's strengths without forcing DDD where it doesn't fit.

7. Iterative Development

Argument: DDD requires a deep understanding of the domain upfront, which can be challenging in agile environments where requirements evolve frequently.

Impact: In fast-paced projects, the time spent on domain modeling might delay delivery. Spring Boot's rapid development capabilities might be better suited to iterative, feedback-driven approaches.

8. Not All Domains Are Complex
 
Argument: DDD is most valuable in domains with complex business rules and logic.

Impact: For domains that are primarily data-driven or lack complex rules, DDD adds unnecessary complexity. Spring Boot's simplicity shines in these scenarios.

9. Distributed Systems Challenges
Argument: DDD doesn't inherently address the challenges of distributed systems, such as eventual consistency, service discovery, or fault tolerance.

Impact: In a microservices architecture, these concerns are critical, and DDD alone won't solve them. Spring Boot's ecosystem (e.g., Spring Cloud) provides tools to handle these challenges, but they might not align neatly with DDD.

10. Team Dynamics

Argument: DDD requires close collaboration between developers and domain experts.
 
Impact: If the team lacks access to domain experts or struggles with communication, DDD might not deliver its intended benefits. Spring Boot and a microservice architecture allow teams to work more independently.

Conclusion
DDD may have been a powerful tool for designing web services or complex monoliths, but with microservices and Spring Boot it is largely unnecessary.


lördag 25 januari 2025

What is a REST API?

People mistakenly believe they are designing REST APIs when they are actually designing RPC APIs.
The misconception has its roots in a lack of theoretical knowledge about REST and the RESTful principles.
I don't trust people who have a very rigid view of how a REST API should be implemented, claiming that their way of designing is the "best practice" found on the Internet.
The truth is that the majority of developers don't know the basic principles of REST. The majority of microservices out there are RPC APIs, not REST APIs, and that is a frustration Roy Fielding (the father of REST) already expressed in 2008.
 
There are a few similarities:
  • Both use HTTP
  • Client-Server Model
 
There are crucial distinctions:

  • Resource-Oriented vs. Action-Oriented:
    • REST: Focuses on manipulating resources (e.g., "users", "products") through HTTP verbs (GET, POST, PUT, PATCH, DELETE), respecting the semantics of the verbs and their characteristics, such as the idempotency of some of them.
    • RPC: Primarily focuses on executing remote procedures or functions (e.g., "getUserById", "createProduct").
  • Data Format:
    • REST: Often leverages standardized data formats like JSON or XML.
    • RPC: Can use various data formats, including custom ones.
  • Semantics of the URL:
    • REST: URLs typically represent resources (e.g., /users/123, /products).
    • RPC: URLs might reflect function names (e.g., /getUser, /createProduct).
  •  Flexibility:
    • REST: Generally more flexible and easier to evolve due to its focus on resources and standard HTTP verbs.
    • RPC: Can be more tightly coupled to specific implementations.
While both can use HTTP and operate within a client-server model, REST emphasizes a resource-oriented approach with standardized HTTP verbs, whereas RPC focuses on remote procedure calls, often with less emphasis on resource representation and semantic correctness. The sketch below makes the contrast concrete.
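
A minimal sketch, assuming Spring Web on the classpath; the User resource and the endpoints are invented for the example. The RPC variants are shown as comments, the resource-oriented variant as code:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// RPC style (action-oriented): the URL names a procedure and POST carries everything.
//   POST /getUser        {"id": 123}
//   POST /createUser     {"name": "Ada"}
//   POST /deleteUser     {"id": 123}
//
// REST style (resource-oriented): the URL names the resource and the verb carries the semantics.
@RestController
@RequestMapping("/users")
class UserController {

  record User(Long id, String name) {}

  private final Map<Long, User> users = new ConcurrentHashMap<>();
  private final AtomicLong sequence = new AtomicLong();

  // GET is safe and idempotent: reading never changes the resource.
  @GetMapping("/{id}")
  ResponseEntity<User> get(@PathVariable long id) {
    User user = users.get(id);
    return user == null ? ResponseEntity.notFound().build() : ResponseEntity.ok(user);
  }

  // POST creates a new resource and is not idempotent: repeating it creates another user.
  @PostMapping
  User create(@RequestBody User body) {
    long id = sequence.incrementAndGet();
    User created = new User(id, body.name());
    users.put(id, created);
    return created;
  }

  // DELETE is idempotent: removing the same resource twice leaves the same end state.
  @DeleteMapping("/{id}")
  ResponseEntity<Void> delete(@PathVariable long id) {
    users.remove(id);
    return ResponseEntity.noContent().build();
  }
}
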
Roy Fielding, the creator of REST, expressed significant frustration with how the term "REST API" is commonly misused. Here's a breakdown of his key concerns:
 
Misinterpretation of Core Principles:

  • Focus on HTTP: Many developers perceive REST solely as using HTTP, overlooking its architectural constraints and guiding principles. Fielding emphasizes that REST is not just about using HTTP verbs (GET, POST, PUT, PATCH, DELETE) but about adhering to a specific architectural style, understanding and implementing the semantics of the verbs within constraints like statelessness, cacheability, and a uniform interface. Even when APIs have idempotency as a requirement, they often don't use the verbs that guarantee idempotency or respect the semantics of the endpoints.
  • Ignoring HATEOAS: A core principle of REST is "Hypermedia as the Engine of Application State" (HATEOAS). This means that the API should provide links within its responses, allowing clients to discover and navigate resources dynamically. Many so-called "REST APIs" lack this crucial aspect, relying instead on pre-defined client-side logic (see the sketch after this list).
  • Oversimplification and Misuse: "REST" as a Buzzword: The term "REST API" has become a buzzword, often used indiscriminately for any HTTP-based API, regardless of its actual adherence to REST principles. This dilutes the meaning and makes it difficult to distinguish truly RESTful APIs from others.
  • Focus on CRUD Operations: Many APIs labeled as "RESTful" primarily focus on CRUD operations (Create, Read, Update, Delete) on resources, neglecting the broader architectural considerations of REST. A REST API does not need to be a CRUD API: it can be a decision-making API or follow whatever implementation style the requirements dictate; what matters is the broader architectural considerations and semantics.
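
A sketch of what HATEOAS adds, using Spring HATEOAS as one possible implementation; the OrderController, the Order type and the link relations are invented for the example. The point is that the response itself tells the client what it can do next:

import org.springframework.hateoas.EntityModel;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

@RestController
class OrderController {

  record Order(long id, String status) {}

  @GetMapping("/orders/{id}")
  EntityModel<Order> get(@PathVariable long id) {
    Order order = new Order(id, "OPEN"); // stand-in for a real lookup
    // The representation carries the links the client may follow next,
    // instead of the client hard-coding those URLs.
    return EntityModel.of(order,
        linkTo(methodOn(OrderController.class).get(id)).withSelfRel(),
        linkTo(methodOn(OrderController.class).cancel(id)).withRel("cancel"));
  }

  @DeleteMapping("/orders/{id}")
  EntityModel<Order> cancel(@PathVariable long id) {
    return EntityModel.of(new Order(id, "CANCELLED"),
        linkTo(methodOn(OrderController.class).get(id)).withSelfRel());
  }
}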

Impact on API Design and Evolution:

  • Limited Flexibility: Misinterpreting REST can lead to API designs that are less flexible and more tightly coupled to specific client implementations.
  • Hindered Innovation: By not adhering to true REST principles, developers miss out on the potential benefits of a more flexible and evolvable architecture.
Fielding laments that the term "REST API" has been widely misused and misunderstood, leading to suboptimal API designs and hindering the realization of the full potential of RESTful architectures. He emphasizes that true REST is more than just using HTTP; it's about adhering to a specific set of architectural constraints that promote flexibility, scalability, and evolvability.

Many developers misuse HTTP verbs, leading to inconsistent and unpredictable APIs. 

  • Common Misuses:
    • Overusing POST: Many developers use POST for updates (instead of PUT or PATCH), even when the operation should be idempotent. This violates the intended semantics of POST, which is meant for creating new resources when idempotency is not a requirement of the creation.
    • For Idempotent Operations: Using POST for operations that should be idempotent (like updating a specific resource) can lead to unexpected side effects if the request is repeated.
    • Ignoring PUT vs. PATCH: Many use POST for partial updates (updating only specific fields of a resource), while PATCH is the correct verb for this purpose.
  • Misinterpreting Idempotency:
    • Confusing with Safety: Some developers mistakenly believe that all idempotent operations are safe (have no side effects). While all safe methods are idempotent, the reverse is not true. PUT and DELETE are idempotent but not safe.
  • Consequences:
    • Inconsistent APIs: Misusing verbs makes APIs harder to understand and use, leading to confusion and errors.
    • Unexpected Behavior: Incorrect verb usage can result in unexpected side effects, such as unintended data modifications or multiple creations of the same resource.
    • Limited Caching: Improper use of verbs can hinder caching mechanisms, as caches may not be able to effectively store and reuse responses.
    • Reduced Interoperability: APIs that don't adhere to HTTP verb semantics are less interoperable with other systems and tools that rely on these standards.
Best Practices? 
  • Use Verbs Correctly:
    • GET: Retrieve a resource.
    • POST: Create a new resource. (If and only if idempotency is not required)
    • PUT: Replace an entire resource. (also for creation if idempotency is required)
    • PATCH: Update specific parts of a resource.
    • DELETE: Remove a resource.
  • Understand Idempotency: Be aware of which verbs are idempotent and design your API accordingly.
  • Follow HTTP Standards: Adhere to the HTTP specification and best practices for verb usage.
By adhering to these principles, developers can create more robust, predictable, and maintainable APIs that are easier to use and integrate with other systems; the PUT versus PATCH distinction in particular is sketched below.
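
A minimal sketch of that distinction (Spring Web assumed; the Product resource is invented for the example): PUT replaces the whole representation and is idempotent, PATCH applies a partial change:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.web.bind.annotation.PatchMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/products/{id}")
class ProductController {

  record Product(String name, String description) {}

  private final Map<Long, Product> products = new ConcurrentHashMap<>();

  // PUT: the client sends the complete representation; repeating the request changes nothing more.
  @PutMapping
  Product replace(@PathVariable long id, @RequestBody Product body) {
    products.put(id, body);
    return body;
  }

  // PATCH: only the supplied fields change; the rest of the resource is left untouched.
  @PatchMapping
  Product patch(@PathVariable long id, @RequestBody Map<String, String> changes) {
    Product current = products.getOrDefault(id, new Product(null, null));
    Product updated = new Product(
        changes.getOrDefault("name", current.name()),
        changes.getOrDefault("description", current.description()));
    products.put(id, updated);
    return updated;
  }
}
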
While POST is generally used for creating resources, there are situations where you need idempotency in a creation scenario. Here's how you can achieve that with PUT:

Idempotency

  • Concept: Introduce a unique identifier (like a UUID), either supplied by the calling client in the request or generated server-side and handed out before the creation request.
  • Implementation: 
    •  The server checks for this identifier.
    • If the identifier is encountered for the first time, the resource is created, and the identifier is associated with it.
    • If the identifier is already present, the server returns the existing resource (created by an earlier, successful request).
POST is not inherently idempotent and must therefore be used ONLY when idempotency is not required; when idempotency is required you have to use PUT. PUT is idempotent according to the HTTP specification, and using it helps you design a semantically correct and more RESTful API, as the sketch below illustrates.
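
A minimal sketch of idempotent creation with PUT (Spring Web assumed; the Order resource and the client-supplied UUID in the path are invented for the example). Repeating the same request cannot create a second resource: the first call answers 201 Created, every repeat answers 200 OK with the existing resource:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/orders")
class OrderCreationController {

  record Order(UUID id, String item) {}

  private final Map<UUID, Order> orders = new ConcurrentHashMap<>();

  // The identifier is chosen by (or handed out to) the client, so a retried PUT
  // hits the same URL and the server can detect that the resource already exists.
  @PutMapping("/{id}")
  ResponseEntity<Order> createOrReturn(@PathVariable UUID id, @RequestBody Order body) {
    Order existing = orders.get(id);
    if (existing != null) {
      return ResponseEntity.ok(existing);
    }
    Order created = new Order(id, body.item());
    orders.put(id, created);
    return ResponseEntity.status(HttpStatus.CREATED).body(created);
  }
}
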
In the end, the Richardson Maturity Model explains very well what we are talking about.

torsdag 14 april 2016

Immutable Pojo

Annotation used to mark that a Pojo is immutable; in my case an immutable Pojo will always have private constructors, a builder and no setters, only getters.
       
package nap.pojo.annotations;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;


@Retention(RetentionPolicy.RUNTIME)  
@Target({ElementType.TYPE})
public @interface ImmutablePojo{

}

       
 
Now, here is my Immutable Pojo:
       

package nap.pojo;

import java.io.Serializable;

import nap.pojo.annotations.ImmutablePojo;

@ImmutablePojo
public class TestBean implements Serializable{

	/**
	 * 
	 */
	private static final long serialVersionUID = 4811204609263660419L;
	private final String name;
	private final Integer id;
	private final String code;
	
	private TestBean(){
		this.code = null;
		this.id = null;
		this.name = null;
	}
	private TestBean(Builder builder){
		this.code = builder.code;
		this.id = builder.id;
		this.name = builder.name;
	}
	
	public static Builder getBuilder(){
		
		return new Builder();
	}
	
	public String getName() {
		return name;
	}
	
	public Integer getId() {
		return id;
	}

	public String getCode() {
		return code;
	}
	
	public static class Builder{
		
		private String name;
		private Integer id;
		private String code;
		
		public Builder withId(Integer id){
			this.id = id;
			return this;
		}
		public Builder withName(String name){
			this.name = name;
			return this;
		}
		public Builder withCode(String code){
			this.code = code;
			return this;
		}
		
		public TestBean build(){
			return new TestBean(this);
		}
	}
}

  
 
And now I need an util that can test all my Immutable Pojo:
       



package nap.pojo.immutable.utils;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.fail;
import static org.mockito.Mockito.mock;

import java.beans.IntrospectionException;
import java.lang.reflect.Array;
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.lang.reflect.Parameter;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.apache.log4j.Logger;
import org.reflections.Reflections;
import org.reflections.scanners.ResourcesScanner;
import org.reflections.scanners.SubTypesScanner;
import org.reflections.util.ClasspathHelper;
import org.reflections.util.ConfigurationBuilder;
import org.reflections.util.FilterBuilder;

import nap.pojo.annotations.ImmutablePojo;
import nap.pojo.immutable.exceptions.ImmutableConventionException;


public class ImmutablePojoTester {

	private static Logger LOGGER = Logger.getLogger(ImmutablePojoTester.class);

	public static  void testPackage(final String packageName, String... skips) throws Exception {
		// Tests Endorsement of JavaBeans Convention
		List<ClassLoader> classLoadersList = new LinkedList<>();
		classLoadersList.add(ClasspathHelper.contextClassLoader());
		classLoadersList.add(ClasspathHelper.staticClassLoader());
		Reflections reflections = new Reflections(new ConfigurationBuilder()
				.setScanners(new SubTypesScanner(
						false /* don't exclude Object.class */), new ResourcesScanner())
				.setUrls(ClasspathHelper.forClassLoader(classLoadersList.toArray(new ClassLoader[0])))
				.filterInputsBy(new FilterBuilder().include(FilterBuilder.prefix(packageName))));
		Set<Class<?>> clazzes = reflections.getSubTypesOf(Object.class);
		for (Class<?> clazz : clazzes) {
			test(clazz);
		}
	}

	public static void test(final Class<?> clazz) throws ImmutableConventionException {
		if (clazz.isAnnotationPresent(ImmutablePojo.class)) {
			try {
				// Tests bean which is BuilderCreated
				testImmutablePojo(clazz, Arrays.asList(new String[] {}));
			} catch (Exception e) {
				LOGGER.error(e.getMessage(), e);
				fail("Exception occured!");
			}
		} else if (clazz.getName().contains("Builder")) {
			if (LOGGER.isInfoEnabled()) {
				LOGGER.info(String.format("Builder (%s) will not be tested", clazz.getCanonicalName()));
			}
		} else {
			throw new ImmutableConventionException(String.format(
					"Pojo %s is not annoted with ImmutablePojo and probably doesn't follow Immutable Pojo Convention!",
					clazz.getCanonicalName()));
		}
	}

	private static void testImmutablePojo(Class<?> clazz, List<String> skips) {
		try {
			Method method = clazz.getMethod("getBuilder", (Class[]) null);
			method.setAccessible(true);
			Object builder = method.invoke(null, (Object[]) null);
			assertNotNull(builder);
			Method build = null;
			List<String> fieldNames = new ArrayList<>();
			Map<String, Object> fieldToValue = new HashMap<>();
			for (Method m : builder.getClass().getMethods()) {

				if (m.getName().startsWith("with")) {
					// derive the field name from the builder method, preserving camelCase (e.g. withFirstName -> firstName)
					String fieldName = m.getName().substring("with".length());
					fieldName = Character.toLowerCase(fieldName.charAt(0)) + fieldName.substring(1);
					fieldNames.add(fieldName);
					Parameter[] params = m.getParameters();
					for (Parameter p : params) {
						Object value = buildValue(p.getType());
						fieldToValue.put(fieldName, value);
						m.invoke(builder, value);
					}
				}
				if (m.getName().startsWith("build")) {
					build = m;
				}
			}
			if (build != null) {

				Object bean = build.invoke(builder, (Object[]) null);
				assertNotNull(bean);
				for (String fieldName : fieldNames) {
					if (!skips.contains(fieldName)) {
						Field field = bean.getClass().getDeclaredField(fieldName);
						field.setAccessible(true);
						Object value = field.get(bean);
						assertNotNull(value);
						Object exp = fieldToValue.get(fieldName);
						assertEquals(value, exp);
					}
				}
			}

		} catch (Exception e) {
			e.printStackTrace();
		}
	}

	public static void test(final Class<?> clazz, final String... skipThese) throws IntrospectionException {
		testImmutablePojo(clazz, Arrays.asList(skipThese));
	}

	private static Object buildMockValue(Class<?> clazz) {
		if (!Modifier.isFinal(clazz.getModifiers())) {
			// Call your mocking framework here
			return mock(clazz);
		} else {
			return null;
		}
	}

	private static Object buildValue(Class<?> clazz) throws InstantiationException, IllegalAccessException,
			IllegalArgumentException, SecurityException, InvocationTargetException {
		// Try mocking framework first
		final Object mockedObject = buildMockValue(clazz);
		if (mockedObject != null) {
			return mockedObject;
		}
		final Constructor<?>[] ctrs = clazz.getConstructors();
		for (Constructor<?> ctr : ctrs) {
			if (ctr.getParameterTypes().length == 0) {
				return ctr.newInstance();
			}
		}
		if (clazz.isArray()) {
			return Array.newInstance(clazz.getComponentType(), 1);
		} else if (clazz == boolean.class || clazz == Boolean.class) {
			return true;
		} else if (clazz == int.class || clazz == Integer.class) {
			return 1;
		} else if (clazz == long.class || clazz == Long.class) {
			return 1L;
		} else if (clazz == double.class || clazz == Double.class) {
			return 1.0D;
		} else if (clazz == float.class || clazz == Float.class) {
			return 1.0F;
		} else if (clazz == char.class || clazz == Character.class) {
			return 'Y';
		} else if (clazz.isEnum()) {
			return clazz.getEnumConstants()[0];
		} else {
			return null; // for the compiler
		}
	}
}
  
 
Finally, my unit-test:
       
package nap.test.utils;

import java.beans.IntrospectionException;

import org.apache.log4j.BasicConfigurator;
import org.junit.Before;
import org.junit.Test;

import nap.pojo.TestBean;
import nap.pojo.immutable.exceptions.ImmutableConventionException;
import nap.pojo.immutable.utils.ImmutablePojoTester;

/**
 * Unit test for simple App.
 */
public class DomainTest 
{
	@Before
	public void setUp(){
		BasicConfigurator.configure();
	}
   
          
    
    @Test
    public void testImmutablePojo() throws IntrospectionException, ImmutableConventionException
    {
    	ImmutablePojoTester.test(TestBean.class);	
    }
    
    @Test
    public void testImmutablePojoInPackage() throws Exception
    {
    	ImmutablePojoTester.testPackage("nap.test.domain");	
    }
}
  
 

måndag 9 februari 2015

Split Data in N parts

       
//How to split a data set into N parts? Assume N (called ratio in the code) = size of the data set (S) divided by the max number of items in each subset (M, called max in the code).

def max = 40
def data = [40,20,30,40,50,60,70,80,20,21,55,64,12,28,88,10,40,20,99,22,43,32,31,55,64,12,28,88,10,40,20,30,40,50,60,70,80,30,40,50,60,70,80,20,21,55,64,12,28,88,10,40,20,99,22,43,32,31,55,64,30,40,50,60,70,80,20,21,55,64,12,28,88,10,40,20,99,22,43,32,31,55,64,55,64,12,28,88,10,40,40,20,30,40,50,60,70,80,20,21,55,64,12,28,88,10,40,20,99,22,43,32,31,55,64,12,28,88,10,40,20,30,40,50,60,70,80,55,64,12,28,88,10,40,40,20,30,40,50,60,70,80,20,21,55,64,12,28,88,10,40,20,99,22,43,32,31,55,64,12,28,88,10,40,20,30,40,50,60,70,80,55,64,12,28,88,10,40,40,20,30,40,50,60,70,80,20,21,55,64,12,28,88,10,40,20,99,22,43,32,31,55,64,12,28,88,30,40,50,60,70,80,55,64,12,28,88,10,40,20,20,21,99,22,43,32,31,55,64,12,28,88,10,20,30,40,50,60,70,80,20,21,99,22,43,32,31,55,64,12,28,88,11,20,30,40,50,60,70,80,20,21,99,22,43,32,31,55,64,12,28,88,20,30,40,50,60,70,80,78] as ArrayList

Double ratio = data.size() / max
Double parts = Math.ceil(ratio)

int upperBound = Math.min(max, data.size());
int lowerBound = 0;



for (int j = 1; j <= parts; j++)
{
   if (lowerBound < data.size() && upperBound <=data.size())
   {
       println "printing elements: "+lowerBound+"-"+upperBound    
       def k = data.subList(lowerBound,upperBound)
       println "Size of elements: "+ k.size()
       lowerBound = upperBound;
       upperBound = (int) (max * (j+1)<=data.size() ? max*(j+1):data.size());
    }
}
       
 

Universal Date Util in Java

       

package se.nap;

import java.text.DateFormat;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.TimeZone;

public class UniversalDateUtil
{
 private static final Map<String, String> REGEX = new HashMap<>();
 public static final String DEFAULT_FORMAT = "yyyy-MM-dd'T'HH:mm:ssZZZ";
 private static final UniversalDateUtil instance = new UniversalDateUtil();

 private UniversalDateUtil()
 {
  // Test Regex here; http://www.regexplanet.com/advanced/java/index.html before inserting new format
  REGEX.put("^\\d{8}$", "yyyyMMdd");
  REGEX.put("^\\d{1,2}-\\d{1,2}-\\d{4}$", "dd-MM-yyyy");
  REGEX.put("^\\d{4}-\\d{1,2}-\\d{1,2}$", "yyyy-MM-dd");
  REGEX.put("^\\d{1,2}/\\d{1,2}/\\d{4}$", "MM/dd/yyyy");
  REGEX.put("^\\d{4}/\\d{1,2}/\\d{1,2}$", "yyyy/MM/dd");
  REGEX.put("^\\d{1,2}\\s[a-zA-Z]{3}\\s\\d{4}$", "dd MMM yyyy");
  REGEX.put("^\\d{1,2}\\s[a-zA-Z]{4,}\\s\\d{4}$", "dd MMMM yyyy");
  REGEX.put("^\\d{12}$", "yyyyMMddHHmm");
  REGEX.put("^\\d{8}\\s\\d{4}$", "yyyyMMdd HHmm");
  REGEX.put("^\\d{1,2}-\\d{1,2}-\\d{4}\\s\\d{1,2}:\\d{2}$", "dd-MM-yyyy HH:mm");
  REGEX.put("^\\d{4}-\\d{1,2}-\\d{1,2}\\s\\d{1,2}:\\d{2}$", "yyyy-MM-dd HH:mm");
  REGEX.put("^\\d{4}-\\d{1,2}-\\d{1,2}T\\d{1,2}:\\d{2}:\\d{2}Z$", "yyyy-MM-dd'T'HH:mm:ss'Z'");
  REGEX.put("^\\d{4}-\\d{1,2}-\\d{1,2}'T'\\d{1,2}:\\d{2}:\\d{2}Z$", "yyyy-MM-dd'T'HH:mm:ss'Z'");
  REGEX.put("^\\d{4}-\\d{1,2}-\\d{1,2}T\\d{1,2}:\\d{2}:\\d{2}z$", "yyyy-MM-dd'T'HH:mm:ss'z'");
  REGEX.put("^\\d{4}-\\d{1,2}-\\d{1,2}T\\d{1,2}:\\d{2}:\\d{2}z$", "yyyy-MM-dd'T'HH:mm:ss'z'");
  REGEX.put("^\\d{4}-\\d{1,2}-\\d{1,2}T\\d{1,2}:\\d{2}:\\d{2}[A-Z]{0,3}$", "yyyy-MM-dd'T'HH:mm:ssz"); 
  REGEX.put("^\\d{4}-\\d{1,2}-\\d{1,2}T\\d{1,2}:\\d{2}:\\d{2}[-|+]\\d{4}$","yyyy-MM-dd'T'HH:mm:ssZZZ");
  REGEX.put("^\\d{1,2}/\\d{1,2}/\\d{4}\\s\\d{1,2}:\\d{2}$", "MM/dd/yyyy HH:mm");
  REGEX.put("^\\d{4}/\\d{1,2}/\\d{1,2}\\s\\d{1,2}:\\d{2}$", "yyyy/MM/dd HH:mm");
  REGEX.put("^\\d{1,2}\\s[a-zA-Z]{3}\\s\\d{4}\\s\\d{1,2}:\\d{2}$", "dd MMM yyyy HH:mm");
  REGEX.put("^\\d{1,2}\\s[a-zA-Z]{4,}\\s\\d{4}\\s\\d{1,2}:\\d{2}$", "dd MMMM yyyy HH:mm");
  REGEX.put("^\\d{14}$", "yyyyMMddHHmmss");
  REGEX.put("^\\d{8}\\s\\d{6}$", "yyyyMMdd HHmmss");
  REGEX.put("^\\d{1,2}-\\d{1,2}-\\d{4}\\s\\d{1,2}:\\d{2}:\\d{2}$", "dd-MM-yyyy HH:mm:ss");
  REGEX.put("^\\d{4}-\\d{1,2}-\\d{1,2}\\s\\d{1,2}:\\d{2}:\\d{2}$", "yyyy-MM-dd HH:mm:ss");
  REGEX.put("^\\d{1,2}/\\d{1,2}/\\d{4}\\s\\d{1,2}:\\d{2}:\\d{2}$", "MM/dd/yyyy HH:mm:ss");
  REGEX.put("^\\d{4}/\\d{1,2}/\\d{1,2}\\s\\d{1,2}:\\d{2}:\\d{2}$", "yyyy/MM/dd HH:mm:ss");
  REGEX.put("^\\d{1,2}\\s[a-zA-Z]{3}\\s\\d{4}\\s\\d{1,2}:\\d{2}:\\d{2}$", "dd MMM yyyy HH:mm:ss");
  REGEX.put("^\\d{1,2}\\s[a-zA-Z]{4,}\\s\\d{4}\\s\\d{1,2}:\\d{2}:\\d{2}$", "dd MMMM yyyy HH:mm:ss");

 }

 public static String determineDateFormat(String dateString)
 {
  for (String regexp : REGEX.keySet())
  {
   if (dateString.matches(regexp))
   {
    return REGEX.get(regexp);
   }
  }
  
  return null; // Unknown format.
 }

 public static Collection<String> getFormats()
 {
  return REGEX.values();
 }
 public static Date parseString(String dateTime) throws ParseException
 {
  return parseString(dateTime, null);
 }
 public static Date parseString(String dateTime, Locale locale) throws ParseException
 {
  String format = determineDateFormat(dateTime);
  if (format != null)
  {
   return getFormatter(format,locale).parse(dateTime);
  }
  return null;
 }
 public static String toString(final Date date)
 {
  return toString(date, DEFAULT_FORMAT, null, null);
 }
 public static String toString(final Date date, Locale locale)
 {
  return toString(date, DEFAULT_FORMAT, locale, null);
 }
 
 public static String toString(final Date date, final String format, Locale locale, final String timezone)
 {
  
  DateFormat formatter =  getFormatter(format, locale);
  if(timezone!=null)
  {
   final TimeZone tz = TimeZone.getTimeZone(timezone);
   formatter.setTimeZone(tz);
  }
 
  return formatter.format(date);

 }

 private static DateFormat getFormatter(String format, Locale locale)
 {
  if(locale==null)
  {
   locale = Locale.getDefault();
  }
  return instance.new ConcurrentDateFormatAccess(format, locale).dateformat.get();
 }

  
 class ConcurrentDateFormatAccess
 {
  private ThreadLocal<DateFormat> dateformat;
  
  public ConcurrentDateFormatAccess(final String pattern, final Locale locale)
  {
   dateformat = new ThreadLocal<DateFormat>()
   {
    @Override
    public DateFormat get()
    {
     return super.get();
    }

    @Override
    protected DateFormat initialValue()
    {
     return new SimpleDateFormat(pattern, locale);
    }

    @Override
    public void remove()
    {
     super.remove();
    }

    @Override
    public void set(DateFormat value)
    {
     super.set(value);
    }
   };
  }
 }
}
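
A small usage sketch (the date strings are arbitrary examples; the exact output depends on the default locale and time zone):

package se.nap;

import java.util.Date;

public class UniversalDateUtilDemo {

  public static void main(String[] args) throws Exception {
    // The format is detected from the string itself.
    Date compact = UniversalDateUtil.parseString("20150209");
    Date iso = UniversalDateUtil.parseString("2015-02-09T13:45:00+0100");

    System.out.println(UniversalDateUtil.determineDateFormat("09 Feb 2015 13:45")); // dd MMM yyyy HH:mm
    System.out.println(UniversalDateUtil.toString(compact));                        // rendered with DEFAULT_FORMAT
    System.out.println(UniversalDateUtil.toString(iso, "yyyy/MM/dd HH:mm", null, "UTC"));
  }
}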

       
 

tisdag 14 oktober 2014

RSA Example...

       

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.PublicKey;

import javax.crypto.Cipher;


public class EncryptionUtil {

  /**
   * String to hold name of the encryption algorithm.
   */
  public static final String ALGORITHM = "RSA";
  private static final int MAX_LENGTH = 16384;

  /**
   * String to hold the name of the private key file.
   */
  public static final String PRIVATE_KEY_FILE = "C:/keys/private.key";

  /**
   * String to hold name of the public key file.
   */
  public static final String PUBLIC_KEY_FILE = "C:/keys/public.key";

  /**
   * Generate key which contains a pair of private and public key using 16384
   * bytes. Store the set of keys in Prvate.key and Public.key files.
   * 
   * @throws NoSuchAlgorithmException
   * @throws IOException
   * @throws FileNotFoundException
   */
  public static void generateKey() {
    try {
      final KeyPairGenerator keyGen = KeyPairGenerator.getInstance(ALGORITHM);
      // maximum key length in bits; generating a key of this size is very slow
      keyGen.initialize(MAX_LENGTH);
      final KeyPair key = keyGen.generateKeyPair();

      File privateKeyFile = new File(PRIVATE_KEY_FILE);
      File publicKeyFile = new File(PUBLIC_KEY_FILE);

      // Create files to store public and private key
      if (privateKeyFile.getParentFile() != null) {
        privateKeyFile.getParentFile().mkdirs();
      }
      privateKeyFile.createNewFile();

      if (publicKeyFile.getParentFile() != null) {
        publicKeyFile.getParentFile().mkdirs();
      }
      publicKeyFile.createNewFile();

      // Saving the Public key in a file
      ObjectOutputStream publicKeyOS = new ObjectOutputStream(
          new FileOutputStream(publicKeyFile));
      publicKeyOS.writeObject(key.getPublic());
      publicKeyOS.close();

      // Saving the Private key in a file
      ObjectOutputStream privateKeyOS = new ObjectOutputStream(
          new FileOutputStream(privateKeyFile));
      privateKeyOS.writeObject(key.getPrivate());
      privateKeyOS.close();
    } catch (Exception e) {
      e.printStackTrace();
    }

  }

  /**
   * The method checks if the pair of public and private key has been generated.
   * 
   * @return flag indicating if the pair of keys were generated.
   */
  public static boolean areKeysPresent() {

    File privateKey = new File(PRIVATE_KEY_FILE);
    File publicKey = new File(PUBLIC_KEY_FILE);

    if (privateKey.exists() && publicKey.exists()) {
      return true;
    }
    return false;
  }

  /**
   * Encrypt the plain text using public key.
   * 
   * @param text
   *          : original plain text
   * @param key
   *          :The public key
   * @return Encrypted text
   * @throws java.lang.Exception
   */
  public static byte[] encrypt(String text, PublicKey key) {
    byte[] cipherText = null;
    try {
      // get an RSA cipher object and print the provider
      final Cipher cipher = Cipher.getInstance(ALGORITHM);
      // encrypt the plain text using the public key
      cipher.init(Cipher.ENCRYPT_MODE, key);
      cipherText = cipher.doFinal(text.getBytes());
    } catch (Exception e) {
      e.printStackTrace();
    }
    return cipherText;
  }

  /**
   * Decrypt text using private key.
   * 
   * @param text
   *          :encrypted text
   * @param key
   *          :The private key
   * @return plain text
   * @throws java.lang.Exception
   */
  public static String decrypt(byte[] text, PrivateKey key) {
    byte[] dectyptedText = null;
    try {
      // get an RSA cipher object and print the provider
      final Cipher cipher = Cipher.getInstance(ALGORITHM);

      // decrypt the text using the private key
      cipher.init(Cipher.DECRYPT_MODE, key);
      dectyptedText = cipher.doFinal(text);

    } catch (Exception ex) {
      ex.printStackTrace();
    }

    return new String(dectyptedText);
  }

}
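
And here is a usage sketch for the utility above (wrap it in a main method; note that generating a 16384-bit RSA key pair takes a very long time, and plain RSA can only encrypt data smaller than the key size):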



    try {

      // Check if the pair of keys are present else generate those.
      if (!EncryptionUtil.areKeysPresent()) {
        // Method generates a pair of keys using the RSA algorithm and stores it
        // in their respective files
       EncryptionUtil.generateKey();
      }

      final String originalText = "In practice, we need to store the public and private keys somewhere. Typically, the private key will be placed on our server, and the public key distributed to clients. To store the key, we simply need to pull out the modulus and the public and private exponents, then write these numbers to some file (or put in whatever convenient place).";
      ObjectInputStream inputStream = null;

      // Encrypt the string using the public key
      inputStream = new ObjectInputStream(new FileInputStream(EncryptionUtil.PUBLIC_KEY_FILE));
      final PublicKey publicKey = (PublicKey) inputStream.readObject();
      final byte[] cipherText = EncryptionUtil.encrypt(originalText, publicKey);

      // Decrypt the cipher text using the private key.
      inputStream = new ObjectInputStream(new FileInputStream(EncryptionUtil.PRIVATE_KEY_FILE));
      final PrivateKey privateKey = (PrivateKey) inputStream.readObject();
      final String plainText = EncryptionUtil.decrypt(cipherText, privateKey);

      // Printing the Original, Encrypted and Decrypted Text
      System.out.println("Original: " + originalText);
      System.out.println("Encrypted: " +cipherText.toString());
      System.out.println("Decrypted: " + plainText);

    } catch (Exception e) {
      e.printStackTrace();
    }

       
 

torsdag 18 september 2014

Hazelcast and Groovy

       

@Grapes(
        @Grab(group='com.hazelcast', module='hazelcast-all', version='3.2.5')
)
import com.hazelcast.core.Hazelcast
import com.hazelcast.core.HazelcastInstance
import com.hazelcast.query.SqlPredicate

import java.util.Map
import java.util.Queue

//Bean to store
class Customer implements Serializable
{
    static final long serialVersionUID = 423248904328809L;
    def name;
    //numeric values have to be declared with the right type here
    Integer internalId;
    def birthDate;
}


HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance()
Map customers = hazelcastInstance.getMap( "customers" )
customers.put(1, new Customer(internalId:10001, name:'Luis Dinkel', birthDate:'1976-01-23'))
customers.put(2, new Customer(internalId:10111, name:'Logan Askim', birthDate:'1971-11-13'))
 
 def cust = customers.values(new SqlPredicate("internalId>10000 AND name LIKE '%Lui%'"));
 println(cust.name)


hazelcastInstance.shutdown()