content.json
{"meta":{"title":"Zehai'blog","subtitle":null,"description":"会做饭的厨子","author":"Zhang Zehai","url":"http://zehai.info"},"pages":[{"title":"Interview","date":"2019-04-06T14:55:46.000Z","updated":"2021-07-27T07:09:41.863Z","comments":true,"path":"Interview/index.html","permalink":"http://zehai.info/Interview/index.html","excerpt":"","text":"JS去重 排序,遍历顺序比较,不同就push到新数组 1234567891011let c=[1,2,3,4,5,6,1,2,3]function unique(arr){ let Arr=arr.sort() let b=[] for(let i=0;i<Arr.length;i++){ if(Arr[i]!==Arr[i+1]){ b.push(Arr[i]) } } return b} indexOf遍历 12345678910let c=[1,2,3,4,5,6,1,2,3]function unique(arr){ let b=[] for(let i=0;i<arr.length;i++){ if(b.indexOf(arr[i])==-1){ b.push(arr[i]) } } return b} map()/forEach() set 123456let c=[1,2,3,4,5,6,1,2,3]function unique(arr){ let b=new Set(arr) let c=Array.from(b) return c} 递归regressionFibonacci1234567891011121314151617181920212223242526//迭代,number=40,time=1msint Fibonacci(int number) { if (number <= 0) { return 0; } if (number == 1 || number == 2) { return 1; } int first = 1, second = 1, third = 0; for (int i = 3; i <= number; i++) { third = first + second; first = second; second = third; } return third;}//regression n=40,time=272mspublic int Fibonacci(int n) { if (n <= 0) { return 0; } if (n == 1||n==2) { return 1; } return Fibonacci(n - 2) + Fibonacci(n - 1);} 变种:青蛙跳台阶数据结构链表翻转链表有序链表合并123456789101112131415public ListNode Merge(ListNode list1,ListNode list2) { if(list1 == null){ return list2; } if(list2 == null){ return list1; } if(list1.val <= list2.val){ list1.next = Merge(list1.next, list2); return list1; }else{ list2.next = Merge(list1, list2.next); return list2; } } 堆和队列用队列实现堆栈用堆栈实现队列SortingHTTPS的认证过程 TCP/IP三次握手 Java基础HashMap实现原理HashMap是拉链(Hash再散列的一种方式)的一种实现方式(JDK1.7之前,以及1.8以后,阈值小于8),左边一列为数组(位桶数组),右边为链表(Entry链,通过equal判断) Hashmap在JDK1.8,阈值大于8后,采用红黑树 1234567891011121314151617static final class TreeNode extends LinkedHashMap.Entry { TreeNode parent; // 父节点 TreeNode left; //左子树 TreeNode right;//右子树 TreeNode prev; // needed 
to unlink next upon deletion boolean red; //颜色属性 TreeNode(int hash, K key, V val, Node next) { super(hash, key, val, next); } //返回当前节点的根节点 final TreeNode root() { for (TreeNode r = this, p;;) { if ((p = r.parent) == null) return r; r = p; } } HashMap,HashTable,ConcurrentHashMap!!可能有错,还未梳理完成 hashmap hashtable concurrentHashMap 底层 数组+链表 数组+链表 数组+链表 线程安全 √ × × null kv不能为空 kv可以为空 初始大小 11 16 扩容 2*old+1 old*2 Redis使用场景 缓存 登录session feature: 基于内存(快的原因) 数据结构简单(set hash list zset) 单线程,避免不必要的上下文切换和竞争条件,没有锁,多路I/O复用,非阻塞(和Nodejs一样) 分布式Kafka优缺点 高吞吐量的消息队列,基本组件:消费者,生产者,node节点等 副本……………… 概念幂等性 数学上,对于x,有f(x)=f(f(x)),则成为幂等性 在分布式环境下,表示对同样的请求,在一次或者多次请求的情况下,对系统的使用资源是一样的,保证失败重试不会导致提交两次 堆栈内存堆内存:是线程共享的,new 栈内存:线程私有的,基本类型的变量和对象的引用变量 数据库事务Transaction是一个操作序列,不可分割的工作单位,以begin transaction开始,以rollback/commit结束 特性 Desc 原子性Atomicity 全对或回滚 一致性Consistency 多次事务结果相同 隔离性Isolation 事务间互不影响 持久性Durability 一但成功即永久 并发一致性问题 丢失修改,后事务对一个未完成事务修改覆盖 脏读,后事务对一个未完成事务修改过的数据读 不可重复读,后事务读取过程中数据被修改 幻读,类似不可重复,区别于是插入操作修改 四种隔离级别 未提交读 提交读 可重复读(MYSQL默认隔离级别) 可串行化(事务全部串行,效率低) 乐观锁&悲观锁悲观锁:先锁再操作,适合数据修改频繁场景 乐观锁:先读,读的时候在判断是否有事务再更新,有则重读(通过加版本号或者时间戳作为字段为判断依据,缺点就是每次数据更新都需要更新这个字段) 封锁类型 排它锁X 共享锁S 只读共享 意向锁 三级封锁协议(待补充,不重要) 一级,X 二级 三级 数据库索引的实现原理(B+树)innoDB中使用的B+树 IO次数少,B+树中间节点存储索引,数据在叶子节点中 查询B树要遍历,B+只需要遍历叶子节点 why 选择B+ 效率比O(1)好 B树索引支持大小比较,范围查找 innodb和myISAM区别 innodb支持事务 myisam仅有表级锁,innodb表+行锁 innodb支持外键 innodb在线热备份 数据库优化SQL优化该部分基本需要遵守一些准则,合理利用索引 避免!=,>,<,null的判断(索引失效) 返回必要的列,减小select * (graphQL解决方案) Limit限制 索引优化(主) 建立合适的索引,不能太多,也不能太少(太少全表检索用不到索引,太多就冗余,索引可能比数据还多?) 
表结构优化 范式遵循(默认3) 选择合适类型,尽可能不要存储null 水平切分,根据哈希取模,将一个表水平切分,当一个表中数据增多时,sharding,将数据分不到集群的不同节点上 垂直切分,将不常用的字段独立放一个表中 配置优化 增加TCP支持的队列 MYSQL配置缓存池大小 etc 硬件优化(次) 磁盘性能 CPU 内存 主从复制Replication将数据从一个mysql中复制到其他服务器中,默认异步同步 主服务器binary log dump线程将数据更改写入日志 从服务器IO线程读取数据修改日志,写入本地relaylog 从服务器SQL线程,读取relaylog解析并执行 why选择主从复制 读写分离,主写从读 缓解锁竞争 从使用myisam提升查询性能 数据实时备份 降低单个IO访问频率(显然) 索引分类 普通索引 唯一索引,索引值唯一,可空 主键索引,唯一不可空 复合索引 覆盖索引 聚集索引 分区索引 虚拟索引 MVCCmulti-version concurrency control 每行记录后面保存两个隐藏的列用来存储版本号和删除版本号 创建版本号:创建数据时的事务版本号 删除版本号:同上 范式依次严格 第一范式:不存在可分的列 第二范式:主键 第三范式(默认)外键 第 表连接方式内连接:满足连接条件的行组合起来(交集) 自然连接 等值连接 外连接:左连接,右连接,全连接 交叉连接:笛卡尔积,即m*n 存储过程存储过程是事先经过编译并存储在数据库中的一段SQL语句的集合。 想要实现相应的功能时,只需要调用这个存储过程就行了(类似于函数,输入具有输出参数)。 优点: 预先编译,效率高 封装操作减少网络通信 可复用 安全性高,可以让低权限用户直接调用 易维护 缺点(可以忽略): 移植性差 调试复杂 修改复杂 删除命令 delete删除全表数据或部分数据,触发日志,可还原 truncate清空所有数据不可回滚,自增重置为1,无日志 drop删除数据,表,索引,约束不能回滚,无日志 视图取出来的数据可视化,操作不影响数据库中数据 1234CREATE VIEW view_name ASSELECT column_name(s)FROM table_nameWHERE condition 游标定位查询返回结果集中的特定行,以对特定行进行操作,分普通游标和滚动游标(了解,应用不多) ACID特性原子性: 一致性: 隔离性: 持久性: 如何rollback写入前会有redo log和undo log,如果失败会逆向还原到事务开始之前 B-Tree vs B+Tree原因:每个节点融入更多元素,多叉 B树存储数据: 对比B+的Feature看 B+存储数据: 引入原因:相对于BTree高度更低(非叶子节点数据更多),查询效率更高(聚簇索引+叶节点链环) 查询效率高 如查询3-7,查到3后可以直接链式遍历到7(大于小于等搜索效率高) key都在叶节点,非叶子节点不存储数据,提高效率 同等情况下,一般B+更矮 默认每个页的大小为 16K,即每个叶子节点为一页 聚簇索引clustered index,叶子节点是数据,数据在物理上是按主键 key 顺序存放 非聚簇索引secondary index,叶子节点是key,数据在物理上按插入的顺序存放 灵魂拷问Java多线程 线程池的原理,为什么要创建线程池? 线程的生命周期,什么时候会出现僵死进程; 什么实现线程安全,如何实现线程安全; 创建线程池有哪几个核心参数?如何合理配置线程池的大小? synchronized、volatile区别、synchronized锁粒度、模拟死锁场景、原子性与可见性; JVM相关 JVM内存模型,GC机制和原理;GC分哪两种;什么时候会触发Full GC? JVM里的有几种classloader,为什么会有多种? 什么是双亲委派机制?介绍一些运作过程,双亲委派模型的好处;(这个我真的不会…) 什么情况下我们需要破坏双亲委派模型; 常见的JVM调优方法有哪些?可以具体到调整哪个参数,调成什么值? JVM虚拟机内存划分、类加载器、垃圾收集算法、垃圾收集器、class文件结构是如何解析的; Java扩展 红黑树的实现原理和应用场景; NIO是什么?适用于何种场景? Java9比Java8改进了什么; HashMap内部的数据结构是什么?底层是怎么实现的? 
说说反射的用途及实现,反射是不是很慢,我们在项目中是否要避免使用反射; 说说自定义注解的场景及实现; List和Map区别,Arraylist与LinkedList区别,ArrayList与Vector 区别; Spring Spring AOP的实现原理和场景;(应用场景很重要) Spring bean的作用域和生命周期; Spring Boot比Spring做了哪些改进?Spring 5比Spring4做了哪些改进;(惭愧呀,我们还在用Spring4,高版本的没关心过) Spring IOC是什么?优点是什么? SpringMVC、动态代理、反射、AOP原理、事务隔离级别; 中间件 Dubbo完整的一次调用链路介绍; Dubbo支持几种负载均衡策略? Dubbo Provider服务提供者要控制执行并发请求上限,具体怎么做? Dubbo启动的时候支持几种配置方式? 了解几种消息中间件产品?各产品的优缺点介绍; 消息中间件如何保证消息的一致性和如何进行消息的重试机制? Spring Cloud熔断机制介绍; Spring Cloud对比下Dubbo,什么场景下该使用Spring Cloud? 数据库 锁机制介绍:行锁、表锁、排他锁、共享锁; 乐观锁的业务场景及实现方式; 事务介绍,分布式事物的理解,常见的解决方案有哪些,什么事两阶段提交、三阶段提交; MySQL记录binlog的方式主要包括三种模式?每种模式的优缺点是什么? MySQL锁,悲观锁、乐观锁、排它锁、共享锁、表级锁、行级锁; 分布式事务的原理2阶段提交,同步异步阻塞非阻塞; 数据库事务隔离级别,MySQL默认的隔离级别、Spring如何实现事务、 JDBC如何实现事务、嵌套事务实现、分布式事务实现; SQL的整个解析、执行过程原理、SQL行转列; Redis Redis为什么这么快?redis采用多线程会有哪些问题? Redis支持哪几种数据结构; Redis跳跃表的问题; Redis单进程单线程的Redis如何能够高并发? Redis如何使用Redis实现分布式锁? Redis分布式锁操作的原子性,Redis内部是如何实现的?","raw":null,"content":null},{"title":"Docker","date":"2020-02-11T01:51:34.000Z","updated":"2021-07-27T07:09:41.862Z","comments":true,"path":"Docker/index.html","permalink":"http://zehai.info/Docker/index.html","excerpt":"","text":"hello worlddokcer containerdocker imagedocker swarm","raw":null,"content":null},{"title":"ComputerOS","date":"2019-04-21T04:15:39.000Z","updated":"2021-07-27T07:09:41.862Z","comments":true,"path":"ComputerOS/index.html","permalink":"http://zehai.info/ComputerOS/index.html","excerpt":"","text":"conceptOS(Operating System)是控制和管理整个计算机系统的硬件和软件资源,合理调度计算机的工作和资源分配,提供给用户和其他软件接口和环境。 四大特征 并发 共享 虚拟 异步 并发概念:指多个事件在同一时间间隔内发生,宏观上是同时发生,微观上是交替发生(区别于并行:多个事件同一时刻发生) 共享分为:互斥共享,同时共享 虚拟把物理上的实体变为若干个逻辑上对应物,如4GB内存可以同时运行大于4G的软件(时分复用或者空分复用) 异步多个程序并发执行,断断续续同步推进 运行机制与体系结构运行机制指令=特权指令+非特权指令 特权:如内存清零 非特权:如普通运算 CPU=用户态(非核心)+核心态(核心+非核心) 程序状态寄存器PSW,0为用户态,1位核心态 内核: 时钟管理(计时) 中断处理 原语 系统资源管理(进程,存储器,设备管理) 大内核:高性能但维护麻烦 微内核:结构清晰但切换开销大 中断和异常中断是CPU进入核心态,当前进程暂停,而核心态–>用户态只需要通过PSW的特权指令就可以进入 中断=内中断+外中断 内中断:异常,例外,陷入 指令中断 硬件故障(如缺页),软件中断(如编程语法错误) 外中断:外设请求,人工干预 系统调用 设备管理 文件管理 进程控制 进程通信 内存管理 
传递系统调用参数–>限制性陷入指令(用户态)–>执行系统调用相应服务程序(核心态)–>返回用户程序 陷入指令在用户态执行,执行陷入指令后立即引发一个内中断,从而CPU进入核心态 库函数应用程序–>库函数–>系统调用 库函数目的:高级开发,更方便系统调用 进程why: what:是运行过程,是系统进行资源分配和调度的最小单位 进程段=程序段+数据段+PCB PCB=PID+UID+进程控制管理信息(进程状态,优先级)+资源分配(程序段指针,数据段指针,键盘鼠标)+处理机信息(寄存器值) feature: 动态性 并发性 独立性 异步性 组织方式: 链接方式(执行指针,就绪指针,阻塞指针) 索引方式(执行指针,就绪指针,索引指针) 五种状态 运行 就绪 阻塞 创建 终止","raw":null,"content":null},{"title":"Java","date":"2020-01-29T14:12:19.000Z","updated":"2021-07-27T07:09:41.863Z","comments":true,"path":"Java/index.html","permalink":"http://zehai.info/Java/index.html","excerpt":"","text":"[TOC] 面向对象概念面向对象(Object Oriented),对象就是真实世界中的实体,对象与实体是一一对应的,也就是说现实世界中每一个实体都是一个对象,它是一种具体的概念。换句话说,对象是真是存在内存的,实例化的类,比如你想要的粥,类就是锅里的粥,实例化就是盛出来放进碗中,而对象就是一碗粥,真实存在,可以被你操控,又或者你女朋友A是你择偶标准(类)实例化出来的对象,真实存在,但择偶标准并不真实存在 对象VS过程面向过程编程更加注重一个类解决一个问题,是一个解决方案,而不需要你去考虑这个对象具体怎么实现的 换句话说,面向过程就是生产一辆汽车,面向对象就是直接买一辆汽车开,你可以把汽车销售商理解为一个对象的提供商,为你提供服务。 三大核心三大核心都是尽最大的可能复用代码 继承定义类的基石(共同的属性和方法) 避免子类重复定义,单继承(只能继承一次)! 子类拥有父类的所有属性和方法(除了private修饰的属性不能拥有) 目的:代码复用 重载:同名函数不同参数 重写:重写方法实现方式(子类个性化) final的几个问题: final修饰的类不允许继承 final修饰的方法不允许重写 final 修饰属性,则该类的该属性不会进行隐式的初始化,所以 该final 属性的初始化属性必须有值,或在构造方法中赋值(但只能选其一,且必须选其一,因为没有默认值!),且初始化之后就不能改了,只能赋值一次 final 修饰变量,则该变量的值只能赋一次值,在声明变量的时候才能赋值,即变为常量 super的几个问题: 访问父类对象,如:super.age,super.eat() 子类构造的过程调用父类的构造方法,默认调用无参构造方法 封装(set/get)封装(Encapsulation)是指一种将抽象性函式接口的实现细节部份包装、隐藏起来的方法。 封装可以被认为是一个保护屏障,防止该类的代码和数据被外部类定义的代码随机访问。 要访问该类的代码和数据,必须通过严格的接口控制。 封装最主要的功能在于我们能修改自己的实现代码,而不用修改那些调用我们代码的程序片段。 适当的封装可以让程式码更容易理解与维护,也加强了程式码的安全性。 多态多态就是同一个接口,使用不同的实例而执行不同操作 多态性是对象多种表现形式的体现。 现实中,比如我们按下 F1 键这个动作: 如果当前在 Flash 界面下弹出的就是 AS 3 的帮助文档; 如果当前在 Word 下弹出的就是 Word 帮助; 在 Windows 下弹出的就是 Windows 帮助和支持。 基本数据类型内置数据类型 byte(-128~127即2^7,默认0) short(16位,默认0) int(32位,默认0) long(64位,默认0L) float(32位,默认0.0f) double(64位,默认0.0d) boolean(1位默认false) char(16位0~65535) 引用数据类型引用为指针的另一种变种,引用类型指向一个对象,数组 常量final修饰 拆箱和装箱why 数字字符日期等基本类型封装成对象方便处理,但是对于CPU来说,一个完整的对象需要很多的指令,对于内存来说,有需要很多的内存,性能自然很低,所以设计装箱和拆箱,是的基本类型在编程中当做非对象处理,在另外场合有当做对象处理 int的自动拆箱和装箱只在-128到127范围中进行,超过该范围的两个integer的 == 
判断是会返回false的。 变量存储方式基本类型->栈内存 引用类型->堆内存 String和包装类string基础字符串属于对象,char属于基本类型,String greetings=“hello,world” 如果如果需要对字符串做很多修改,那么应该选择使用 StringBuffer & StringBuilder 类。通过append可以直接修改,Stringbuilder不是线程安全的但是又速度优势,建议使用,StringBuffer是线程安全 appen追加 reverse反转 delete移除 insert将int插入 replace替换等 String是final class不可变不可继承,由于不可变,所以拼接字符串会有很多中间无用的对象,所以会影响性能,但不影响正常小批量的字符串拼接 StringBuffer是解决上述问题的方案,提供append和 add方法,拼接至尾部,他的本质是一个县城安全的可修改的字符序列,添加了synchronized,但也付出了性能代价 很多情况下字符串拼接无需线程安全,则可以使用StringBuilder StringBuffer 和 StringBuilder 二者都继承了 AbstractStringBuilder ,底层都是利用可修改的char数组(JDK 9 以后是 byte数组)。 string基本用法 String s1 = “mpptest” String s2 = new String(); String s3 = new String(“mpptest”) “==”判断引用内容,equals判断引用地址 string类源码1234567891011public void intern () { //2:string的intern使用 //s1是基本类型,比较值。s2是string实例,比较实例地址 //字符串类型用equals方法比较时只会比较值 String s1 = "a"; String s2 = new String("a"); //调用intern时,如果s2中的字符不在常量池,则加入常量池并返回常量的引用 String s3 = s2.intern(); System.out.println(s1 == s2);//false System.out.println(s1 == s3);//true} (此处待补充) String和JVM Java栈(线程私有) 每个Java虚拟机线程都有自己的Java虚拟机栈,Java虚拟机栈用来存放栈帧,每个方法被执行的时候都会同时创建一个栈帧(Stack Frame)用于存储局部变量表、操作栈、动态链接、方法出口等信息。每一个方法被调用直至执行完成的过程,就对应着一个栈帧在虚拟机栈中从入栈到出栈的过程 Java堆(线程共享)存放所有对象 方法区(线程共享) 方法区在虚拟机启动的时候被创建,它存储了每一个类的结构信息,例如运行时常量池、字段和方法数据、构造函数和普通方法的字节码内容、还包括在类、实例、接口初始化时用到的特殊方法。在JDK8之前永久代是方法区的一种实现,而JDK8元空间替代了永久代,永久代被移除,也可以理解为元空间是方法区的一种实现。 常量池(线程共享) 常量池常被分为两大类:静态常量池和运行时常量池。 静态常量池也就是Class文件中的常量池,存在于Class文件中。 运行时常量池(Runtime Constant Pool)是方法区的一部分,存放一些运行时常量数据。 String为什么不可变变量在栈中,数据本身在堆中,引用不可变 String常用工具apache-commons final关键字Java类和包抽象类和接口代码块和代码执行顺序自动拆箱装箱Class类和Object类异常回调反射泛型枚举类注解IO流多线程内部类javac和javapjava8新特性类和包序列化和反序列化继承封装多态实现原理","raw":null,"content":null},{"title":"KnowledgeTree","date":"2020-01-18T15:03:00.000Z","updated":"2021-07-27T07:09:41.864Z","comments":true,"path":"KnowledgeTree/index.html","permalink":"http://zehai.info/KnowledgeTree/index.html","excerpt":"","text":"[TOC] 
数据结构队列队列有两种实现方式,一个是连续空间的数组(无需空间存指针,但是扩容只能整体复制到新的大数组,而且线性,基本首位相连使用),一种是链表形态(需要额外空间存指针,但扩容直接追加,以及增删元素不需要动其他元素) Java中队列分为阻塞和非阻塞两种,顾名思义,阻塞队列是一个一个顺序执行,非阻塞队列是并发的 非阻塞队列:ConcurrentLinkedQueue(无界线程安全),采用CAS机制(compareAndSwapObject原子操作)。 阻塞队列:ArrayBlockingQueue(有界)、LinkedBlockingQueue(无界)、DelayQueue、PriorityBlockingQueue,采用锁机制;使用 ReentrantLock 锁。 阻塞队列的DelayQueue可以做成延时队列,可以见这篇文章→:关于Promise的思考,原本Node需要借助Promise循环一次只取一个特性的延时队列,可以使用delayQueue直接求解 队列除了数组,链表的区别,阻塞 分类,还可以安全区分,简单来说就是多线程下并发读写是否会出问题。 集合Set:注重独一无二的性质,该体系集合可以知道某物是否已近存在于集合中,不会存储重复的元素 hashset hashtable treeset 链表、数组链表List数组Array其实和队列queue是一种东西,只是队列(或者堆栈)是特殊化的链表/数组,他们限制了元素的进出方式,解决了顺序处理/递归压栈的问题。 不过链表,数组区别于set,前者是有序的,set是无序不重复的 List主要分为3类,ArrayList, LinkedList和Vector,都继承自Collection,只是各自有自己的特性的方法 ArrayList是一个数组实现的列表,由于数据是存入数组中的,所以它的特点也和数组一样,查询很快,但是中间部分的插入和删除很慢 LinkedList还是一个双向链表 Vector就是ArrayList的线程安全版,它的方法前都加了synchronized锁,其他实现逻辑都相同。如果对线程安全要求不高的话,可以选择ArrayList,毕竟synchronized也很耗性能 字典、关联数组栈树树结构是一对多的数据结构 他的应用包括:红黑树,数据库存储,磁盘文件存储等 二叉树每个节点最多有两个叶子节点 完全二叉树平衡二叉树二叉查找树BST红黑树B系列树B-树是一种多路搜索树 关键字集合分布在整颗树中; 任何一个关键字出现且只出现在一个结点中; 搜索有可能在非叶子结点结束; 其搜索性能等价于在关键字全集内做一次二分查找; 自动层次控制 B+ 树是一种树数据结构,是一个n叉树,每个节点通常有多个孩子,一棵B+树包含根节点、内部节点和叶子节点。根节点可能是一个叶子节点,也可能是一个包含两个或两个以上孩子节点的节点 B+ 树通常用于数据库和操作系统的文件系统中 B* 树 是B+树的变体,在B+树的非根和非叶子结点再增加指向兄弟的指针; LSM树LSM(Log-Structured Merge-Trees)和 B+ 树相比,是牺牲了部分读的性能来换取写的性能(通过批量写入),实现读写之间的平衡。 Hbase、LevelDB、Tair(Long DB)、nessDB 采用 LSM 树的结构。LSM可以快速建立索引。 B+ 树读性能好,但由于需要有序结构,当key比较分散时,磁盘寻道频繁,造成写性能较差。 LSM 是将一个大树拆分成N棵小树,先写到内存(无寻道问题,性能高),在内存中构建一颗有序小树(有序树),随着小树越来越大,内存的小树会flush到磁盘上。当读时,由于不知道数据在哪棵小树上,因此必须遍历(二分查找)所有的小树,但在每颗小树内部数据是有序的。 极端的说,基于LSM树实现的HBase的写性能比MySQL高了一个数量级,读性能低了一个数量级。 优化方式:Bloom filter 替代二分查找;compact 小数位大树,提高查询性能。 Hbase 中,内存中达到一定阈值后,整体flush到磁盘上、形成一个文件(B+数),HDFS不支持update操作,所以Hbase做整体flush而不是merge update。flush到磁盘上的小树,定期会合并成一个大树。 BitSet常用算法排序+查找布隆过滤器字符串比较DFS+BFS贪心回溯剪枝动态规划朴素贝叶斯推荐算法推荐算法通常被分为四大类 协同过滤推荐算法 基于内容的推荐算法 混合推荐算法 流行度推荐算法 最小生成树最短路径算法并发概念多线程线程安全事务锁操作系统原理CPU进程线程协程通信Linux设计模式六大原则 开闭原则:对扩展开放,对修改关闭,多使用抽象类和接口。 
里氏替换原则:基类可以被子类替换,使用抽象类继承,不使用具体类继承。 依赖倒转原则:要依赖于抽象,不要依赖于具体,针对接口编程,不针对实现编程。 接口隔离原则:使用多个隔离的接口,比使用单个接口好,建立最小的接口。 迪米特法则:一个软件实体应当尽可能少地与其他实体发生相互作用,通过中间类建立联系。 合成复用原则:尽量使用合成/聚合,而不是使用继承。 23种常见设计模式应用场景单例模式单例模式:单例模式的意思就是只有一个实例。单例模式确保某一个类只有一个实例,而且自行实例化并向整个系统提供这个实例。这个类称为单例类。 单例模式有三种: 懒汉式单例:第一次调用初始化,但初始化时需加锁 12345678910public class Singleton{ private static Singleton singleton; private Singleton {}; public static synchronized Sigleton getInstance{ if(singleton ==null){ singleton=new Singleton(); } retrun singleton; }} 饿汉式单例:类加载初始化,后续一直存在,浪费内存 1234567public class Singleton{ private static final Singleton SINGLETON=new Singleton(); private Signleton(){ } public static Signleton getInstance(){ retrun SINGLETON; }} 登记式单例:内部类在外部调用加载,无需用锁 123456789public class Singleton{ private Sigleton(){} public static Singleton getInstance(){ retrun Holder.SINGLETON; } private static class Holder{ private static final Singleton SINGLETON=new Singleton(); }} 责任链模式MVCIOCAOPUML微服务运维监控APM统计分析持续继承CI/CDJenkins环境分离自动化运维Ansiblepuppetchef测试TDD理论单元测试压力测试全链路压测A/B、灰度、蓝绿测试##虚拟化 KVMXenOpenVZ容器化Docker云技术OpenstackDevOps文档管理中间件Web ServerNginxOpenRestyTengineApache HttpdTomcatJetty缓存本地缓存客户端缓存服务端缓存Web缓存MemcachedRedis架构回收策略Tair消息队列消息总线消息顺序RabbitMQRocketMQActiveMQKafkaRedisZeroMQ定时调度单机定时调度分布式定时调度RPCRPC = Remote Procedure Call 目的:调用远程服务接口如同调用本地(方便本地开发,及服务间调用) 构成:server,client,registry(Redis,zookeeper,consul,more) 技术:动态代理(CgLib,Javasisit),序列化,NIO(Netty),注册中心 流程: clent调用本地方法 client stub,封装成为网络传输消息体 client stub 从registry获取地址发送 server解码,调用本地方法,返回到server stub server stub 结果打包返回给client client解码,获取结果 优秀框架: 框架 简介 开发语言 分布式 多序列化框架支持 Dubbo 阿里,Java高性能优秀的服务框架 java √ √ Motan 微博,Java框架 java √ √ rpcx Go go √ √ gRPC Google,基于protoBuf序列化,不是分布式 多语言 × × thrift Apache,跨语言 多语言 × × DubboThriftgRPC数据中间件Sharding Jdbc日志系统日志搜集配置中心API网关网络协议OSITCPIPHTTPHTTP2HTTPS网络模型EpollJava NIOKqueue连接和短连接框架零拷贝序列化序列化是二进制协议 Hessian Protobuf 
Hessianprotobuf数据库搜索引擎性能大数据安全常用开源框架分布式设计设计思想+开发模式项目管理通用业务术语技术趋势政策法规架构师素质团队管理资讯技术资源","raw":null,"content":null},{"title":"Like","date":"2020-10-12T07:19:53.000Z","updated":"2021-07-27T07:09:41.866Z","comments":true,"path":"Like/index.html","permalink":"http://zehai.info/Like/index.html","excerpt":"","text":"record something 🎬Movie(数据来源:淘票票订单记录,肯定有遗漏的) 2020-10-11:姜子牙 2020-10-03:我和我的家乡 2020-09-30:死无对证 2019-07-27:🌟哪吒之魔童降世 2019-04-27:复仇者联盟4:终局之战 2019-02-17:疯狂的外星人 2019-02-07:流浪地球 2018-12-30:天气预爆 2018-07-28:西红柿首富 2018-07-14:🌟我不是药神 2017-06-18:🌟异形:契约 一些忘了时间的电影 外出偷马 🌟大护法 彼岸花 🌟西小河的夏天 复仇者联盟3 黑豹 雷神3 🎮 Switch / Game购买记录 2020-10-06:超级马里奥兄弟U豪华版 2020-09-10:overcooked 1+2 2020-09-06:健身环大冒险 2020-09-04:swtich国行 📱Phone👦 2020-07-13:FaceNote F1 4+32G Black 1499 2019-10-19:Oneplus 7T 8+128G Blue 2999 2017-12-01:Oneplus 5T 6+64G Black 2999 2015-05-01:Mi 4s 3+64G White 1699 2014-07-12:Vivo Y18L 2+8G White 2199 👨 2019-11-14:Huawei Mate 30 4G 6+128 Black 3999 2018-07-28:Oneplus6 6+64G Black 3199 2017-xx-xx:Huawei Mate 8 3+32G Silvery 2015-04-13:Meizu Blue 2+16G White 1199 More :Lenovo A830 and Nokia Series 👩 2020-09-18:RealmeX7 Pro 8+128G C-Colour 1999 2018-02-13:Mi 6 4+64G Blue 2099 2017-01-05:Lenovo zuk z2 3+32G White 1199 2014-xx-xx:Vivo Y22L 🎵 Music 2020-10-12","raw":null,"content":null},{"title":"Kubernates","date":"2020-03-31T02:48:01.000Z","updated":"2021-07-27T07:09:41.865Z","comments":true,"path":"Kubernates/index.html","permalink":"http://zehai.info/Kubernates/index.html","excerpt":"","text":"0.Hello Minikube1.Basics1.01.1Create a Cluster master 管理cluster node 为工作节点,拥有kubelet(管理Node、与master沟通的agent) minikube:提供k8s基本的操作,如start,stop,delete,status minikube –help minikube start kubectl:与k8s交互,kubectl controls the Kubernetes cluster manager kubectl –help 1234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950515253545556Basic Commands (Beginner): create Create a resource from a file or from stdin. 
expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service run Run a particular image on the cluster set Set specific features on objectsBasic Commands (Intermediate): explain Documentation of resources get Display one or many resources edit Edit a resource on the server delete Delete resources by filenames, stdin, resources and names, or by resources and label selectorDeploy Commands: rollout Manage the rollout of a resource scale Set a new size for a Deployment, ReplicaSet or Replication Controller autoscale Auto-scale a Deployment, ReplicaSet, or ReplicationControllerCluster Management Commands: certificate Modify certificate resources. cluster-info Display cluster info top Display Resource (CPU/Memory/Storage) usage. cordon Mark node as unschedulable uncordon Mark node as schedulable drain Drain node in preparation for maintenance taint Update the taints on one or more nodesTroubleshooting and Debugging Commands: describe Show details of a specific resource or group of resources logs Print the logs for a container in a pod attach Attach to a running container exec Execute a command in a container port-forward Forward one or more local ports to a pod proxy Run a proxy to the Kubernetes API server cp Copy files and directories to and from containers. auth Inspect authorizationAdvanced Commands: diff Diff live version against would-be applied version apply Apply a configuration to a resource by filename or stdin patch Update field(s) of a resource using strategic merge patch replace Replace a resource by filename or stdin wait Experimental: Wait for a specific condition on one or many resources. 
convert Convert config files between different API versions kustomize Build a kustomization target from a directory or a remote url.Settings Commands: label Update the labels on a resource annotate Update the annotations on a resource completion Output shell completion code for the specified shell (bash or zsh)Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config Modify kubeconfig files plugin Provides utilities for interacting with plugins. version Print the client and server version information $ kubectl cluster-infoKubernetes master is running at https://172.17.0.14:8443KubeDNS is running at https://172.17.0.14:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy $ kubectl get nodesNAME STATUS ROLES AGE VERSIONminikube NotReady master 9s v1.17.3 1.2deploy an App如果你配置了k8s Deployment configuration,你可以部署容器应用在k8s cluster上。当你配置了Deployment ,k8s master会schedule (按时?)通知cluster中所有的node节点上的应用。 当你的app 实例创建的时候,k8s Deployment Controller会持续监测,如果部署的节点故障,Deployment Controller会给app重新换个节点 你可以通过kubectl来create & manage Deployment,kubectl使用k8s API 来和cluster交互。 当你创建一个Deployment时候,你需要指定app所用的容器镜像和需要运行的副本replcas,当然也可以创建后更新这些信息 12345678910111213141516171819202122232425$ kubectl get nodesNAME STATUS ROLES AGE VERSIONminikube NotReady master 15s v1.17.3$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1deployment.apps/kubernetes-bootcamp created//指定deploymentname 和image Location//1.寻找合适node部署 2.准备部署 3.配置 新节点重启的配置//kubectl可以创建proxy来对外暴露你的服务$ kubectl proxyStarting to serve on 127.0.0.1:8001$ curl http://localhost:8001/version{ "major": "1", "minor": "17", "gitVersion": "v1.17.3", "gitCommit": "06ad960bfd03b39c8310aaf92d1e7c12ce618213", "gitTreeState": "clean", "buildDate": "2020-02-11T18:07:13Z", "goVersion": "go1.13.6", "compiler": "gc", "platform": "linux/amd64"} 1.3 Explore App1.3.1Pods当你create 
Deployment时,k8s会创建一个Pod来托管app实例,Pod是k8s abstraction,代表一组(单个或多个)app 容器,共享资源,资源包括: 共享内存—>as Volumes 网络(as a unique cluster ip address) 信息(如何run each container) pod可以关联多个app容器,将他们视作一个服务,他们有相同的ip地址,相同的端口,总是co-located & co-scheduled,并分享上下文 pods是k8s的最小单元,当你create deployment 时会创建pods,容器在pods内部,每个pods都绑定在刚创建的node上,直到node终止,如果node down则会在别的可用的node上部署相同的pod 1.3.2nodepod始终运行在node上。node是依赖于master,运行在vm或者物理机上的worker machine。一个node可以拥有多个Pods,并且被master管理者 1.3.3排除故障 kubectl get - list resources kubectl describe - show detailed information about a resource kubectl logs - print the logs from a container in a pod kubectl exec - execute a command on a container in a pod 1234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950515253545556575859606162636465$ kubectl describe podsName: kubernetes-bootcamp-765bf4c7b4-dnm2jNamespace: defaultPriority: 0Node: minikube/172.17.0.10Start Time: Tue, 31 Mar 2020 05:54:38 +0000Labels: pod-template-hash=765bf4c7b4 run=kubernetes-bootcampAnnotations: <none>Status: RunningIP: 172.18.0.5IPs: IP: 172.18.0.5Controlled By: ReplicaSet/kubernetes-bootcamp-765bf4c7b4Containers: kubernetes-bootcamp: Container ID: docker://2c2a5074ee7c87a4bdfe6b73db2dab8168d407642e2c76df8edd56b45441ec0b Image: gcr.io/google-samples/kubernetes-bootcamp:v1 Image ID: docker-pullable://jocatalin/kubernetes-bootcamp@sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af Port: 8080/TCP Host Port: 0/TCP State: Running Started: Tue, 31 Mar 2020 05:54:41 +0000 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-wk8gk (ro)Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled TrueVolumes: default-token-wk8gk: Type: Secret (a volume populated by a Secret) SecretName: default-token-wk8gk Optional: falseQoS Class: BestEffortNode-Selectors: <none>Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s 
node.kubernetes.io/unreachable:NoExecute for 300sEvents: <none>kubectl logs $POD_NAME//exec$ kubectl exec $POD_NAME envPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=kubernetes-bootcamp-765bf4c7b4-9xxqtKUBERNETES_PORT_443_TCP_PROTO=tcpKUBERNETES_PORT_443_TCP_PORT=443KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1KUBERNETES_SERVICE_HOST=10.96.0.1KUBERNETES_SERVICE_PORT=443KUBERNETES_SERVICE_PORT_HTTPS=443KUBERNETES_PORT=tcp://10.96.0.1:443KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443NPM_CONFIG_LOGLEVEL=infoNODE_VERSION=6.3.1HOME=/root$ kubectl exec -ti $POD_NAME bashroot@kubernetes-bootcamp-765bf4c7b4-9xxqt:/# 1.4 Explor App Publicpods具备生命周期,当worker节点 die的时候,其上的pods也会lost。副本集(replicaSet)会故障时创建新的Pods保障程序正常运行 k8s有一个抽象的service是定义了pods逻辑集合和访问Pod的策略,它使得pods之间解耦,使用YAML or JSON实现。pods集合通常由LabelSelector决定。公开服务可以通过如下方法: clusterIP(default)内部访问 nodePort LoadBalancer ExternalName 通过标签的方式,可以进行逻辑标记 标记环境ENV 标记版本 分类object 123456789101112131415161718192021222324252627282930313233343536$ kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080service/kubernetes-bootcamp exposed$ kubectl describe deploymentName: kubernetes-bootcampNamespace: defaultCreationTimestamp: Wed, 01 Apr 2020 08:44:09 +0000Labels: run=kubernetes-bootcampAnnotations: deployment.kubernetes.io/revision: 1Selector: run=kubernetes-bootcampReplicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailableStrategyType: RollingUpdateMinReadySeconds: 0RollingUpdateStrategy: 25% max unavailable, 25% max surgePod Template: Labels: run=kubernetes-bootcamp Containers: kubernetes-bootcamp: Image: gcr.io/google-samples/kubernetes-bootcamp:v1 Port: 8080/TCP Host Port: 0/TCP Environment: <none> Mounts: <none> Volumes: <none>Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailableOldReplicaSets: <none>NewReplicaSet: kubernetes-bootcamp-765bf4c7b4 (1/1 replicas created)Events: Type Reason Age From Message ---- ------ 
---- ---- ------- Normal ScalingReplicaSet 5m44s deployment-controller Scaled up replica set kubernetes-bootcamp-765bf4c7b4 to 1 1.5 Scale App1.6 Update App2.Configuration3.Stateless Application4.Stateful Application5.Clusters6.Services","raw":null,"content":null},{"title":"Node.js","date":"2019-04-05T02:49:57.000Z","updated":"2021-07-27T07:09:41.867Z","comments":true,"path":"Nodejs/index.html","permalink":"http://zehai.info/Nodejs/index.html","excerpt":"","text":"what Node.js® is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world. Node是JavaScript运行环境 事件驱动,非阻塞IO模型 使用npm管理包 基本原理 Chrome V8 是 Google 发布的开源 JavaScript 引擎,采用 C/C++ 编写,在 Google 的 Chrome 浏览器中被使用。Chrome V8 引擎可以独立运行,也可以用来嵌入到 C/C++ 应用程序中执行。 Event Loop 事件循环(由 libuv 提供) Thread Pool 线程池(由 libuv 提供) 带来的好处 用户体验 资源分配 Blocking 与Non-blocking大量诸如IO,锁操作会造成堵塞,node标准库中所有IO都提供了非阻塞的一步版本,接受回调函数 回调函数可以放入底层线程池操作,不阻塞主线程 可以自定义回调函数,处理返回结果 我们可以直接比较一下阻塞和非阻塞的代码: 123456789//blocking 且如果错误需要catch否则程序会**崩溃**const fs = require('fs');const data = fs.readFileSync('/file.md'); // blocks here until file is read//non-blockingconst fs = require('fs');fs.readFile('/file.md', (err, data) => { if (err) throw err;});//执行第6行后,放进队列,开始执行下面的业务数据 并发和吞吐量Node起服务大概会启动一个进程,包含7个线程, 1 个 Javascript 执行主线程;1 个 watchdog 监控线程用于处理调试信息;1 个 v8 task scheduler 线程用于调度任务优先级,加速延迟敏感任务执行;4 个 v8 线程(可参考以下代码),主要用来执行代码调优与 GC 等后台任务;以及用于异步 I / O 的 libuv 线程池。 并发指的是执行完其他工作后,事件循环执行回调的能力,高并发原理:如一个请求50ms请求,其中45ms都在数据库,那么我可以5ms处理完,推入队列,其余时间继续响应其他请求。 eventloop是JS的特性,在其他语言中,会创建线程来处理并发工作(比如同时两个请求,创建两个线程来处理,但是他们之间可能访问数据会加锁,如果有100个请求,访问同一个数据,加个锁,99个请求都只能等着了) 混合阻塞代码和非阻塞代码处理io应该避免如下写法: 1234567const fs = require('fs');fs.readFile('/file.md', (err, data) => { if (err) throw err; console.log(data);});fs.unlinkSync('/file.md');//unlink first//应该将unlink写入回调函数 
EventLoopwhateventloop(事件循环)给予node非阻塞IO的优势,尽管JS是单线程 由于大多数内核支持多线程,他们可以再后台执行多个任务,当某个任务完成的时候,内核会告诉node,以便将callback添加到poll队列来执行 Explained当node启东时,他会初始化eventloop,处理输入的script(或者放入REPL),这些script可以执行异步API调用,定时器,或者调用process.nextTick(),然后开始处理事件循环 其中每个矩形被称作phase(个人理解:阶段) 每一个phase都有一个有callbacks等待执行的FIFO的队列,尽管每个phase都有自己特殊的地方,但是通常,当eventloop执行到某一个phase时候,他将执行该阶段特定的操作,然后调用queue里面的callback function直到队列为空或者达到该phase最大的回调执行数量。 timers:setTimeout() and setInterval() pending callbacks: 执行推迟到下一个tick的IO回调 idle, prepare: 内部使用 poll: 接收新的IO event,执行与IO相关的回调(关闭回调,计时器回调,setImmediate)(该phase可能会block) check: setImmediate() close callbacks: 关闭回调, 如 socket.on('close', ...). 由于执行某一个phase都可能新增更多的操作任务,一些新的event也会进入poll 队列中排队,因此可在处理轮询事件时候将poll event排队,使得长时间运行的回调也可以使轮询阶段运行的时间比计时器的阈值长很多。 待更新计时器目的:设定时间段后执行函数,直接使用无需require 使用nodejs控制时间连续性settimeout异步编程Buffer The Buffer class was introduced as part of the Node.js API to enable interaction with octet streams in TCP streams, file system operations, and other contexts. 原因:应用需要 处理网络协议 操作数据库 处理图片 接受上传文件等 处理大量二进制数据,JavaScript自由的字符串不能满足这些要求 结构与C++结合:node_buffer–>Buffer/SlowBuffer 也就是JavaScript的Buffer或者SlowBuffer依赖于C++的内建模块,buffer内存不归v8管理,是堆外内存 声明123456const str="helloworld"const buf=new Buffer(str,'utf-8')console.log(buf)//=><Buffer xx xx xx>16进制数字buf[22]=10//只能赋值0-255的数值,否则会取余256 内存分配Node的C++层面实现内存申请,在JavaScript中分配内存的策略,Node采用slab的动态内存管理机制,slab的3种状态 full:完全分配 partial:部分分配 empty:没有分配 Buffer.poolSize=8*1024,即以8kb为大小Buffer的分界,小于8Kb拼单,大于8kb分配大的slab被大buffer独占 转换可以转换的类型 asc2 utf-8 utf-16LE/UCS-2 Base64 Binary Hex 采用new Buffer(str,[encoding]),默认utf-8 buf.write(string,[offset],[length],[encoding])默认utf-8 buf.toString([encoding],[start],[end])默认utf-8 拼接性能性能大概是字符串的一倍 EventLooplibuv介绍page","raw":null,"content":null},{"title":"README","date":"2020-03-10T09:14:53.000Z","updated":"2021-07-27T07:09:41.867Z","comments":true,"path":"README/index.html","permalink":"http://zehai.info/README/index.html","excerpt":"","text":"Knowledgeall in Knowledge 个人博客 LeetCode 专业课 计算机网络 计算机操作系统 
数据结构 Interview B-Tree vs B+ Tree Node.js feature 异步IO 异步编程 内存控制 理解buffer 网络编程 web应用 多进程 测试即调试 eventLoop模型 Java基础 集合 List Map Set 线程 类加载 IO JVM 锁 常见问题 Data Structure String Linked List Binary Tree Huffman Compression Queue Heap Stack Set Map Graph Sorting Algorithm Divide and Conquer Binary Search Math Knapscak Probability Bitmap 计算机网络 七层模型 物理层 数据链路层 网络层 传输层 应用层 Springboot学习之路 xxx python docs-zh-CN 托管项目 chum neews 其他 春节12响 QS2019 关于作者 邮箱:zehaizhang@aliyun.com 博客:zehai.info CSDN:https://blog.csdn.net/ShancoFolia want Node.js Job,base Peking 墓志铭:restarting","raw":null,"content":null},{"title":"Redis","date":"2019-11-17T07:49:54.000Z","updated":"2021-07-27T07:09:41.868Z","comments":true,"path":"Redis/index.html","permalink":"http://zehai.info/Redis/index.html","excerpt":"","text":"首先感谢掘金@敖丙的《吊打面试官》系列的启发 whatredis是一个缓存,基于内存操作数据,算是数据库的小弟,帮助数据库挡掉一些经常查询的内容,避免扫描库(你要知道有些查询要关联很多表,虽然你可能只查一条数据,但可能要要执行2-3秒,在高并发下是致命的),主要用的Redis以及Java的Memcached,两者各有特点,但市场倾向于Redis 知识点罗列Redis 数据结构: 类型 作用 示例 String 保存字符串 session Hash key-value 计数器 List 数组 数组类型数据 Set 去重数组 自动去重 SortedSet 去重排序数组 微博热搜榜单 HyperLog Geo Pub/Sub Redis modules 暴露接口自定义redis模块,自定义数据结构(json支持,对图数据库支持,匹配添加正则功等),访问redis数据空间,实现阻塞命令,动态链接加载模块,编写神经网络模块等 * 官方有文档,待学习,应用如:BloomFilter,redisSearch,redis-ML 内存清理待补充 分布式锁 目的:redis cluster时候保证一个数据同时只有一个实例在读/写 实现:zookeeper或者setnx争抢锁,expire释放,类似进程锁 持久化 持久方式 实现原理 应用场景 RDB() 全量 冷备份,耗时 AOF 增量 实时增量(sync属性配置同步时间) 混合使用 全量启动,AOF恢复近期数据 寻找key 1.keys-会阻塞-无重复项 2.scan-不阻塞-会有重复项 异步队列实现 1.rpush生产,lpop消费 2.sleep稍后重试,blpop休息直到消息来 3.应用场景:曾经调用仓库系统发货,仓库系统库存1一分钟更新一次,所以将发货数据推入队列中 4.pub/sub可以实现一次生产多次消费,高级的MQ解决意外情况 5.延时队列,sortedset,时间戳做score,内容做key调用zadd生产,zrangebysccore获取N秒前消息轮询消费 pipeline 1.多次IO一次返回 2.压测 同步机制 1.主从同步 2.从从同步 集群 redis sentinal高可用,master宕机选新头儿 redis cluster 扩展性,多个实例 BloomFilter 布隆过滤器,常用用于避免缓存击穿 实现原理:二进制向量和随机映射函数 作用:检查元素是否在合集中 
工作流程:布隆过滤器的原理是,当一个元素被加入集合时,通过K个散列函数将这个元素映射成一个位数组中的K个点,把它们置为1。检索时,我们只要看看这些点是不是都是1就(大约)知道集合中有没有它了:如果这些点有任何一个0,则被检元素一定不在;如果都是1,则被检元素很可能在。这就是布隆过滤器的基本思想。","raw":null,"content":null},{"title":"RocketMQ","date":"2019-05-06T09:30:12.000Z","updated":"2021-07-27T07:09:41.868Z","comments":true,"path":"RocketMQ/index.html","permalink":"http://zehai.info/RocketMQ/index.html","excerpt":"","text":"","raw":null,"content":null},{"title":"Schedule","date":"2019-04-06T14:57:15.000Z","updated":"2021-07-27T07:09:41.869Z","comments":true,"path":"Schedule/index.html","permalink":"http://zehai.info/Schedule/index.html","excerpt":"","text":"更新Interview中JS的去重 更新Structure中的二叉树","raw":null,"content":null},{"title":"System","date":"2019-03-14T09:20:32.000Z","updated":"2021-07-27T07:09:41.870Z","comments":true,"path":"System/index.html","permalink":"http://zehai.info/System/index.html","excerpt":"","text":"","raw":null,"content":null},{"title":"structure","date":"2019-03-14T09:21:10.000Z","updated":"2021-07-27T07:09:41.870Z","comments":true,"path":"Structure/index.html","permalink":"http://zehai.info/Structure/index.html","excerpt":"","text":"线性表LinkedList线性表=数组+链表 线性表中数据元素之间的关系是一对一的关系,即除了第一个和最后一个数据元素之外,其它数据元素都是首尾相接的。 顺序存储 链式存储 典型 数组 链表 物理连续性 连续 分开,靠指针桥接 插入、删除复杂度 O(n) O(1) 查找复杂度 O(1) O(n) 链表=单向链表+双向链表+循环链表等等(为了解决查找复杂度为O(n)的情况,即每次都要从头开始遍历,所以有了双向链表,进一步的把收尾连起来循环,是双向链表的一个进化,能更好的遍历以及利用空间) 12345struct ListNode { int val; ListNode *next; ListNode(int val,ListNode *next=NULL):val(val),next(next){}}; Queue 队列(默认单队列) 循环队列(通过取余,形成逻辑上闭环)rear = (rear - size) % size 不以 front = rear 为放满标志,改为 (rear - front) % size = 1 Java-Collection=set+list+queue Set(HashSet,TreeSet) 无重复元素的数组 HashSet 是哈希表结构,主要利用 HashMap 的 key 来存储元素,计算插入元素的 hashCode 来获取元素在集合中的位置; TreeSet 是红黑树结构,每一个元素都是树中的一个节点,插入的元素都会进行排序; ##List 在 List 中,用户可以精确控制列表中每个元素的插入位置,另外用户可以通过整数索引(列表中的位置)访问元素,并搜索列表中的元素。 与 Set 不同,List 通常允许重复的元素。 另外 List 是有序集合而 Set 是无序集合。 ArrayList:数组队列,动态,线程不安全 vector:矢量队列,和数组类似,线程安全 linkedList:双向链表 TreeBST二叉查找树:binary search tree 
左子树上所有结点的值均小于或等于它的根结点的值 右子树上所有结点的值均大于或等于它的根结点的值 左、右子树也分别为二叉排序树(递归定义) 应用:二分查找O(logn),查找次数等于树的高度 缺点:新插入节点的时候,因为要旋转复杂度较高(引入红黑树的法则降低旋转) RBT(red black tree)原名:平衡二叉B树(symmetric binary B-trees) feature: 每个节点都只能是红色或者黑色 根节点是黑色 每个叶节点(NIL节点,空节点)是黑色的。 如果一个结点是红的,则它两个子节点都是黑的。也就是说在一条路径上不能出现相邻的两个红色结点。 从任一节点到其每个叶子的所有路径都包含相同数目的黑色节点。 红黑树带来的优势: 红黑树根到叶子的最长路径不会超过最短路径的2倍 插入或删除,通过feature来调整结构(有的时候不需要调整) 插入默认是红色,调整包括变色和旋转 应用于JDK,Collection中的TreeMap和TreeSet,HashMap(JDK1.8之后用,且阈值大于8时候才切换到红黑树,之前用的是拉链法)","raw":null,"content":null},{"title":"about","date":"2019-03-11T15:11:47.000Z","updated":"2021-07-27T07:09:41.903Z","comments":true,"path":"about/index.html","permalink":"http://zehai.info/about/index.html","excerpt":"","text":"联系方式 邮箱:zehaizhang@aliyun.com 个人信息 章泽海/男/1995 本科/北京城市学院-信息学院 工作年限:2年 技术博客:http://zehai.info Github:http://github.com/ShawnGoethe 期望职位:NodeJS程序员,数据分析 期望薪资:18k~24k 期望城市:北京 自我评价工作经历极客晨星 (2020年2月~至今)云丁科技 ( 2018年11月 ~ 2020年2月 )凌众时代 ( 2018年6月 ~ 2018年9月 )金山软件 ( 2017年12月 ~ 2018年5月 )internship 教育经历北京城市学院 (2014~ 2018)软件工程 统招","raw":null,"content":null},{"title":"categories","date":"2019-03-22T13:17:40.000Z","updated":"2021-07-27T07:09:41.903Z","comments":true,"path":"categories/index.html","permalink":"http://zehai.info/categories/index.html","excerpt":"","text":"","raw":null,"content":null},{"title":"QS2019","date":"2019-03-13T06:11:05.000Z","updated":"2021-07-27T07:09:42.283Z","comments":true,"path":"qs2019/index.html","permalink":"http://zehai.info/qs2019/index.html","excerpt":"","text":"top 18 No school_name 中文名 1 MIT 麻省理工学院 2 Stanford Uni 斯坦福大学 3 Harvard Uni 哈佛大学 4 California Institute of Technology 加州理工学院 5 Uni of Oxford 牛津大学 6 Uni of Cambridge 剑桥大学 7 ETH Zurich-Swiss Federal Institute of Technology 苏黎世联邦理工大学 8 Imperial College London 帝国理工学院 9 Uni of Chicago 芝加哥大学 10 Uni College London 伦敦大学学院 11 National Uni of Singapore 新加坡国立大学 12 Nanyang Technological Uni,Singapore 新加坡南洋理工大学 13 Princeton Uni 普林斯顿大学 14 Cornell Uni 康奈尔大学 15 Yale Uni 耶鲁大学 16 Columbia Uni 哥伦比亚大学 17
Tsinghua Uni 清华大学 18 The uni of Edinburgh 爱丁堡大学 30 Peking Uni 北京大学 98 Uni of Science and Technology of China 中国科学技术大学","raw":null,"content":null},{"title":"tags","date":"2019-03-22T13:16:06.000Z","updated":"2021-07-27T07:09:42.284Z","comments":true,"path":"tags/index.html","permalink":"http://zehai.info/tags/index.html","excerpt":"","text":"","raw":null,"content":null},{"title":"MySQL","date":"2020-03-09T07:04:55.000Z","updated":"2021-07-27T07:09:41.866Z","comments":true,"path":"MySQL/index.html","permalink":"http://zehai.info/MySQL/index.html","excerpt":"","text":"免责声明:该文章个人翻译,仅做学习使用,可能存在翻译错误 全文重心15.6.2介绍了MYSQL的存储方式,对于我们了解数据库,更好地使用数据库提供了基础 The InnoDB Storage Engine15.1.1 Benefits of Using InnoDB Tables 15.1.2 Best Practices for InnoDB Tables 15.1.3 Verifying that InnoDB is the Default Storage Engine 15.1.4 Testing and Benchmarking with InnoDB 通用存储,默认使用InnoDB,如需更换引擎,创建表时ENGINE参数指定 高性能 高可靠 key Advantage DML遵循ACID规则(支持事务) Row-level locking(行锁),和Oracle风格的一致读取可提高多用户并发性和性能。 聚簇索引:InnoDB表将数据放在在磁盘上,来方便基于主键优化查询。每个InnoDB表都有一个称为聚集索引( clustered index)的主键索引,该索引组织数据以最小化主键查找的I / O。 外键约束 Feature Support B-tree indexes Yes Backup/point-in-time recovery (Implemented in the server, rather than in the storage engine.) Yes Cluster database support No Clustered indexes Yes Compressed data Yes Data caches Yes Encrypted data Yes (Implemented in the server via encryption functions; In MySQL 5.7 and later, data-at-rest tablespace encryption is supported.) Foreign key support Yes Full-text search indexes Yes (InnoDB support for FULLTEXT indexes is available in MySQL 5.6 and later.) Geospatial data type support Yes Geospatial indexing support Yes (InnoDB support for geospatial indexing is available in MySQL 5.7 and later.) Hash indexes No (InnoDB utilizes hash indexes internally for its Adaptive Hash Index feature.) Index caches Yes Locking granularity Row MVCC Yes Replication support (Implemented in the server, rather than in the storage engine.) 
Yes Storage limits 64TB T-tree indexes No Transactions Yes Update statistics for data dictionary Yes mysql8.0 innoDB enhancements InnoDB enhancements 15.1.1Benefits of InnoDB Tables 重新启动数据库(主动or意外)后都无需执行任何特殊操作 拥有缓冲池(buffer pool)经常访问的数据放到内存中处理 设置外键,更新或删除数据,并自动更新或删除其他表中的相关数据。 数据损坏提示 设计正确的主键,会自动优化,使得where,order,group迅速 CRUD通过自动机制(change buffering)可以对同一张表并发读写(行锁的优势),以及缓存CRUD后一起写入减少磁盘IO 慢查询性能优异,自适应哈希索引(Adaptive Hash Index)使得一行被多次访问时,读取更快,就像hash一样 支持压缩表和关联索引(associated indexes) 支持创建和删除索引,而对性能和可用性影响较低 截断file-per-table表空间非常快,释放出的磁盘空间可以交还给操作系统重用,而不是像system tablespace那样只释放出仅供InnoDB内部重用的空间 DYNAMIC格式解决BLOB和长文本的存储问题 INFORMATION_SCHEMA监视存储引擎内部信息 通过查询性能架构表(performance schema)来监视存储的性能 兼容其他引擎的table 处理大量数据时提高CPU效率,获得最佳性能 支持处理大量数据 15.1.2 Best Practices for InnoDB Tables 合理设置主键或者设置自动增量为主键 15.2 InnoDB and the ACID Model15.3 InnoDB Multi-Versioning15.4 InnoDB Architecture15.5.1 Buffer Pool 15.5.2 Change Buffer 15.5.3 Adaptive Hash Index 15.5.4 Log Buffer 图示↓ 15.5.1 Buffer Pool设置buffer pool 目的:提高大容量读取操作的效率,Pool 分为多个页,容纳多行数据,页遵循LRU:least recently used替换原则 缓冲池列表↓ LRU使得常用的页在new sublist部分,oldsublist是不常用,其中的页会被替换 默认配置下算法具体完成 old sublist占据3/8 midpoint 是新旧的边界 新页插入midpoint位置(old头部)且可以被读(因为是用户启动的操作,例如sql query),或者加载预读 对old区的页操作,用户主动操作会使得页移向new区域,预读操作则不会 不断更新,old末位淘汰 默认情况下,用户主动操作dump操作也会把数据加载到pool中,尽管这些数据不再访问,以及预读仅访问一次的页面多次加载移到new的表头,慢慢淘汰,都存在问题 默认配置: 专机配置80%的内存作为buffer pool 将缓冲池划分,避免并发竞争 频繁访问的数据常驻内存 控制预读请求,异步将数据调到buffer pool 适当执行background flushing 配置innodb缓冲池备份,避免意外 监视buffer pool使用 SHOW ENGINE INNODB STATUS 访问monitor提供的缓冲池数据 Type name status InnoDB (见下) =====================================2020-03-11 09:06:22 0x7f4ef0097700 INNODB MONITOR
OUTPUT=====================================Per second averages calculated from the last 52 seconds-----------------BACKGROUND THREAD-----------------srv_master_thread loops: 7 srv_active, 0 srv_shutdown, 181415 srv_idlesrv_master_thread log flush and writes: 0----------SEMAPHORES----------OS WAIT ARRAY INFO: reservation count 3OS WAIT ARRAY INFO: signal count 3RW-shared spins 0, rounds 0, OS waits 0RW-excl spins 1, rounds 30, OS waits 0RW-sx spins 0, rounds 0, OS waits 0Spin rounds per wait: 0.00 RW-shared, 30.00 RW-excl, 0.00 RW-sx------------TRANSACTIONS------------Trx id counter 4410Purge done for trx's n:o < 4410 undo n:o < 0 state: running but idleHistory list length 11LIST OF TRANSACTIONS FOR EACH SESSION:---TRANSACTION 421452050332264, not started0 lock struct(s), heap size 1136, 0 row lock(s)---TRANSACTION 421452050331392, not started0 lock struct(s), heap size 1136, 0 row lock(s)--------FILE I/O--------I/O thread 0 state: waiting for completed aio requests (insert buffer thread)I/O thread 1 state: waiting for completed aio requests (log thread)I/O thread 2 state: waiting for completed aio requests (read thread)I/O thread 3 state: waiting for completed aio requests (read thread)I/O thread 4 state: waiting for completed aio requests (read thread)I/O thread 5 state: waiting for completed aio requests (read thread)I/O thread 6 state: waiting for completed aio requests (write thread)I/O thread 7 state: waiting for completed aio requests (write thread)I/O thread 8 state: waiting for completed aio requests (write thread)I/O thread 9 state: waiting for completed aio requests (write thread)Pending normal aio reads: [0, 0, 0, 0] , aio writes: [0, 0, 0, 0] , ibuf aio reads:, log i/o's:, sync i/o's:Pending flushes (fsync) log: 0; buffer pool: 01129 OS file reads, 352 OS file writes, 117 OS fsyncs0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s-------------------------------------INSERT BUFFER AND ADAPTIVE HASH 
INDEX-------------------------------------Ibuf: size 1, free list len 0, seg size 2, 0 mergesmerged operations: insert 0, delete mark 0, delete 0discarded operations: insert 0, delete mark 0, delete 0Hash table size 34679, node heap has 0 buffer(s)Hash table size 34679, node heap has 1 buffer(s)Hash table size 34679, node heap has 0 buffer(s)Hash table size 34679, node heap has 0 buffer(s)Hash table size 34679, node heap has 1 buffer(s)Hash table size 34679, node heap has 0 buffer(s)Hash table size 34679, node heap has 1 buffer(s)Hash table size 34679, node heap has 4 buffer(s)0.37 hash searches/s, 0.25 non-hash searches/s---LOG---Log sequence number 34372703Log buffer assigned up to 34372703Log buffer completed up to 34372703Log written up to 34372703Log flushed up to 34372703Added dirty pages up to 34372703Pages flushed up to 34372703Last checkpoint at 3437270395 log i/o's done, 0.00 log i/o's/second----------------------BUFFER POOL AND MEMORY----------------------Total large memory allocated 137363456Dictionary memory allocated 464815Buffer pool size 8192Free buffers 6942Database pages 1243Old database pages 478Modified db pages 0Pending reads 0Pending writes: LRU 0, flush list 0, single page 0Pages made young 0, not young 00.00 youngs/s, 0.00 non-youngs/sPages read 1100, created 143, written 2140.00 reads/s, 0.00 creates/s, 0.00 writes/sBuffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/sLRU len: 1243, unzip_LRU len: 0I/O sum[0]:cur[0], unzip sum[0]:cur[0]--------------ROW OPERATIONS--------------0 queries inside InnoDB, 0 queries in queue0 read views open inside InnoDBProcess ID=1, Main thread ID=139976635500288 , state=sleepingNumber of rows inserted 33, updated 323, deleted 0, read 84710.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.65 reads/s----------------------------END OF INNODB MONITOR OUTPUT============================ 核心指标 Total large memory 
allocated 137363456//总内存Dictionary memory allocated 464815//data+index分配内存Buffer pool size 8192Free buffers 6942Database pages 1243Old database pages 478Modified db pages 0Pending reads 0Pending writes: LRU 0, flush list 0, single page 0Pages made young 0, not young 0//移到young区页数0.00 youngs/s, 0.00 non-youngs/s//移动速度Pages read 1100, created 143, written 2140.00 reads/s, 0.00 creates/s, 0.00 writes/s //命中率指标 Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 //预读指标 //预读速度,预读后无效页速度,随机预读速度 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s //LRU列表长度 LRU len: 1243, unzip_LRU len: 0I/O sum[0]:cur[0], unzip sum[0]:cur[0] 15.5.2 Change Bufferchange buffer是一种特殊的数据结构:当二级索引(secondary index)页不在缓冲池中时,它会先缓存对这些页的更改,等页面随后因读操作被加载到缓冲池时,再把缓存的更改合并进去 15.5.3 Adaptive Hash Index15.5.4 Log Buffer15.6.1 Tables15.6.1.1 Creating InnoDB Tablescreate table statement CREATE TABLE t1 (a INT, b CHAR (20), PRIMARY KEY (a)) ENGINE=InnoDB; 默认配置下,默认InnoDB,则无需指定,查询配置中默认引擎指令 SELECT @@default_storage_engine 在以下情况下需要使用ENGINE=InnoDB use mysqldump(A Database Backup Program) 复制到不是innodb的server上 innodb的表和索引创建在system tablespace or file-per-table tablespace or general tablespace。当启用innodb file per table(默认启用),innodb表会被隐式创建在各自独立的file-per-table tablespace中。相反,禁用时会创建在innodb system tablespace中。使用create table … tablespace 语法在general tablespace中创建表 当你在file-per-table tablespace 中创建表的时候,MySQL默认会在数据目录下创建.ibd表空间文件。在Innodb system tablespace中创建的表,建立在已经存在的ibdata文件中,该文件在MySQL data目录中;在general tablespace 中创建的表,建立在已经存在的general tablespace .ibd文件中,该文件可以在data 目录的内部或者外部 在内部实现上,innodb会将每个表的entry添加到data dictionary中。entry包括database name。例如,在数据库中创建table t1 (in test database),data dictionary 中就是test/t1,当你在别的database中创建同名表不会冲突。 InnoDB tables and Row Formats 默认通过innodb default row format配置项配置默认行格式,默认值DYNAMIC.
Dynamic 和 Compressed行格式功能,比如 表压缩 列值的高效行外存储(efficient off-page storage of column values) 该功能需要innodb file per table支持 SET GLOBAL innodb_file_per_table=1;CREATE TABLE t3 (a INT, b CHAR (20), PRIMARY KEY (a)) ROW_FORMAT=DYNAMIC;CREATE TABLE t4 (a INT, b CHAR (20), PRIMARY KEY (a)) ROW_FORMAT=COMPRESSED;//or CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1 ROW_FORMAT=DYNAMIC;//orCREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE = innodb_system ROW_FORMAT=DYNAMIC; InnoDB Tables Primary Keys 主键必须存在,并满足一个或多个条件: 经常索引 不为空 不重复 几乎不更新 # The value of ID can act like a pointer between related items in different tables.CREATE TABLE t5 (id INT AUTO_INCREMENT, b CHAR (20), PRIMARY KEY (id));# The primary key can consist of more than one column. Any autoinc column must come first.CREATE TABLE t6 (id INT AUTO_INCREMENT, a INT, b CHAR (20), PRIMARY KEY (id,a)); innodb table properties mysql> SHOW TABLE STATUS FROM test LIKE 't%' \\G;*************************** 1. row *************************** Name: t1 Engine: InnoDB Version: 10 Row_format: Compact Rows: 0 Avg_row_length: 0 Data_length: 16384Max_data_length: 0 Index_length: 0 Data_free: 0 Auto_increment: NULL Create_time: 2015-03-16 15:13:31 Update_time: NULL Check_time: NULL Collation: utf8mb4_0900_ai_ci Checksum: NULL Create_options: Comment: //ormysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_TABLES WHERE NAME='test/t1' \\G*************************** 1. row *************************** TABLE_ID: 45 NAME: test/t1 FLAG: 1 N_COLS: 5 SPACE: 35 ROW_FORMAT: CompactZIP_PAGE_SIZE: 0 SPACE_TYPE: Single 15.6.1.2 Creating Tables Externally在外部(数据目录之外)创建表的原因: space management IO优化 把表放置在特定性能或者容量的存储设备上(?)
创建方式: 使用DATA DIRECTORY子句 CREATE TABLE … TABLESPACE Syntax Creating a Table in an External General Tablespace Using the DATA DIRECTORY Clause CREATE TABLE t1 (c1 INT PRIMARY KEY) DATA DIRECTORY = '/external/directory'; 在file-per-table tablespaces中支持使用DATA DIRECTORY clause创建表。启用innodb file per table时,表会被隐式创建在file-per-table tablespace中 mysql> SELECT @@innodb_file_per_table;+-------------------------+| @@innodb_file_per_table |+-------------------------+| 1 |+-------------------------+ External mysql> USE test;Database changedmysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) DATA DIRECTORY = '/external/directory';# MySQL creates the table's data file in a schema directory # under the external directoryshell> cd /external/directory/testshell> lst1.ibd 使用须知 external时候要确保innodb知道该目录 (待补充) Using CREATE TABLE … TABLESPACE Syntax CREATE TABLE t2 (c1 INT PRIMARY KEY) TABLESPACE = innodb_file_per_table DATA DIRECTORY = '/external/directory'; 这个方法仅仅可以用在file-per-table tablespaces中创建的表,并不需要enable innodb file per table,在其他方面,这个方法等效于create table …data directory方法,适用相同使用须知 Creating a Table in an External General Tablespace 可以在general tablespaces中依靠external directory创建表 15.6.1.3 Importing InnoDB Tables15.6.1.4 Moving or Copying InnoDB Tables15.6.1.5 Converting Tables from MyISAM to InnoDB15.6.1.6 AUTO_INCREMENT Handling in InnoDB15.6.2 Indexes15.6.2.1 Clustered and Secondary Indexes 该章节受面试官欢迎 innodb采用聚簇索引,存储行数据,聚簇索引与主键同义(Synonymous) 主键(或自增列)会用作聚簇索引 如果没有主键则mysql会在所有列都不为null的情况下,给unique索引用作聚簇索引 如果以上都没有,会内部生成名为GEN_CLUST_INDEX的隐藏聚集索引 why更快 聚簇索引访问行是快速的,因为可以直接导航到所有行数据的页,如果页很大,聚簇索引相对于其他会减少IO(其他的引擎使用不同的页面来存储行数据和索引) 两者相关 在非聚簇(二级)索引中,每条记录包含该索引的列以及主键的值(个人理解,除了存储主键,主键还需要存储地址去指向row) 在聚簇索引中,利用主键的值去寻找这条记录 如果主键长则非聚簇索引使用更多空间,因此它适合使用短主键 15.6.2.2 The Physical Structure of an InnoDB Index除了空间索引(spatial indexes),innoDB索引都是B-tree 空间索引使用R-trees(索引多维数据的专用数据结构) 索引记录存储在B树或者R树,数据结构中的叶子节点中的页中,索引页默认16K 当新的记录插入到聚簇索引中时,innodb会尝试留出页的1/16空间,供将来插入和更新索引记录使用 顺序插入则所得索引页约为15/16装满 乱序插入则页面容量为1/2 to 15/16
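上面 15.6.2.1 提到,二级索引的记录保存的是索引列值和主键,查到主键后再回聚簇索引取整行(俗称“回表”)。下面用两个 TreeMap 粗略模拟这个两步查找,表结构、列名和数据都是演示用的假设,并非 InnoDB 的真实实现:

```java
import java.util.TreeMap;

public class ClusteredLookup {
    // 模拟"按二级索引列查询一行":先查二级索引得到主键,再回聚簇索引取整行
    static String findByName(TreeMap<String, Integer> secondary,
                             TreeMap<Integer, String> clustered, String name) {
        Integer pk = secondary.get(name);             // 第一步:二级索引 -> 主键
        return pk == null ? null : clustered.get(pk); // 第二步:聚簇索引 -> 整行(回表)
    }

    public static void main(String[] args) {
        // 聚簇索引:主键 -> 整行数据(InnoDB 中行就存放在主键 B+ 树的叶子页里)
        TreeMap<Integer, String> clustered = new TreeMap<>();
        clustered.put(1, "id=1,name=zehai");
        clustered.put(2, "id=2,name=aobing");

        // 二级索引:索引列(name)-> 主键值,而不是行的物理地址
        TreeMap<String, Integer> secondary = new TreeMap<>();
        secondary.put("zehai", 1);
        secondary.put("aobing", 2);

        System.out.println(findByName(secondary, clustered, "aobing")); // id=2,name=aobing
    }
}
```

从这个模型也能直观看出文中“主键长则非聚簇索引使用更多空间”的原因:每个二级索引条目都要完整复制一份主键值。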
InnoDB在创建或者重建Btree时执行批量加载,成为排序索引构建(sorted index build)通过innodb fill factor 配置项定义在排序索引构建期间填充每个B-Tree页面上的空间百分比,剩余的空间将来索引增长使用 15.6.2.3 Sorted Index Builds 15.6.2.4 InnoDB FULLTEXT Indexes15.6.3 Tablespaces15.6.3.1 The System Tablespace15.6.3.2 File-Per-Table Tablespaces15.6.3.3 General Tablespaces15.6.3.4 Undo Tablespaces15.6.3.5 Temporary Tablespaces15.6.3.6 Moving Tablespace Files While the Server is Offline###15.6.4 Doublewrite Buffer 15.6.5 Redo Log15.6.6 Undo Logs](https://dev.mysql.com/doc/refman/8.0/en/innodb-fulltext-index.html)","raw":null,"content":null},{"title":"SpringBoot","date":"2019-12-12T02:31:48.000Z","updated":"2021-07-27T07:09:41.869Z","comments":true,"path":"SpringBoot/index.html","permalink":"http://zehai.info/SpringBoot/index.html","excerpt":"","text":"[TOC] Getting Start1.介绍(我们牛逼,开箱即用) 2.开始1.spring-boot精髓之处就是简化了spring的配置, Spring Boot依赖项使用org.springframework.boot,继承自spring-boot-starter-parent,另外还支持gradle(待补充),下面示例中的注解可以详细看一下 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748<?xml version="1.0" encoding="UTF-8"?><project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>myproject</artifactId> <version>0.0.1-SNAPSHOT</version> <!-- Inherit defaults from Spring Boot --> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.2.2.RELEASE</version> </parent> <!-- Override inherited settings --> <description/> <developers> <developer/> </developers> <licenses> <license/> </licenses> <scm> <url/> </scm> <url/> <!-- Add typical dependencies for a web application --> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> </dependencies> <!-- Package as an 
executable jar --> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build></project> 2.groupId和artifactId(待补充) 3. @RestController来标记控制器(Controller)一个是注释的作用,一个是告诉编译器,处理web请求的时候考虑一下我。 @RequestMapping做路由映射⁄(⁄ ⁄•⁄ω⁄•⁄ ⁄)⁄,并返回string类型。 @EnableAutoConfiguration二级注解,这个注释告诉Spring Boot根据所添加的jar依赖关系“猜测”您如何配置Spring。由于spring-boot-starter-web添加了Tomcat和Spring MVC,因此自动配置假定您正在开发Web应用程序并相应地设置Spring。 123456789101112131415161718import org.springframework.boot.*;import org.springframework.boot.autoconfigure.*;import org.springframework.web.bind.annotation.*;@RestController@EnableAutoConfigurationpublic class Example { @RequestMapping("/") String home() { return "Hello World!"; } public static void main(String[] args) { SpringApplication.run(Example.class, args); }} 4.Main方法 这是遵循Java约定的应用程序入口点的标准方法。我们的主要方法通过调用run来启动Spring Boot的SpringApplication类。SpringApplication会引导我们的应用程序,并启动Spring,并且又会启动自动配置的Tomcat Web服务器。将Example.class作为参数传递给run方法,以告诉SpringApplication哪个是主要的Spring组件。args数组也通过传递以公开任何命令行参数。 5.创建可执行的jar(把程序打包成jar) 12345678<build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins></build> 开始打包:mvn package 看细节:jar tvf target/myproject-0.0.1-SNAPSHOT.jar 原始jar:myproject-0.0.1-SNAPSHOT.jar.origin 6.启动 我将提供一个我在debug的项目供大家调试,https://github.com/ShawnGoethe/money Application.java配置成启动项,在IDEA中就可以启动了,在/hello目录下就可以看到返回的字符串了 3.使用3.1build 依赖管理:会自动升级除非指定依赖包本 maven:继承自spring-boot-starter-parent获得默认配置 序号 特性 1. Java 1.8为基础 2 UTF-8编码方式 3. 继承自spring-boot-dependencies 的A Dependency Management section可以省去version标签 4 An execution of the repackage goal with a repackage execution id. Sensible resource filtering. 
5 智能插件配置 6 资源过滤 maven配置你的项目,从继承spring-boot-starter-parent开始 1234567891011<!-- Inherit defaults from Spring Boot --><parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.2.2.RELEASE</version></parent><!--需要指定版本号,除了导入其他启动器--><properties> <!--覆盖parent中spring data的配置--> <spring-data-releasetrain.version>Fowler-SR2</spring-data-releasetrain.version></properties> 如果你不想继承parent配置呢,也可以使用公司的依赖,通过scope标签来保留依赖项目管理↓1234567891011121314151617181920212223242526272829303132<dependencyManagement> <dependencies> <dependency> <!-- Import dependency management from Spring Boot --> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-dependencies</artifactId> <version>2.2.2.RELEASE</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies></dependencyManagement><!--or--><dependencyManagement> <dependencies> <!-- Override Spring Data release train provided by Spring Boot --> <dependency> <groupId>org.springframework.data</groupId> <artifactId>spring-data-releasetrain</artifactId> <version>Fowler-SR2</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-dependencies</artifactId> <version>2.2.2.RELEASE</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies></dependencyManagement> maven plugin目的:把你的项目打包成可执行的jar,类似npm里面的包 实现方式:pom中添加:(以下情况为默认配置) 12345678<build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins></build> gradle(待补充) ant(待补充) starter(命名规则:spring-boot-starter-*) 名称 描述 spring-boot-starter 启动核心(包括自动配置,日志和YAML) spring-boot-starter-activemq Apache ActiveMQ spring-boot-starter-amqp Spring AMQP and Rabbit MQ spring-boot-starter-aop 使用AOP和AspectJ面向切片编程 spring-boot-starter-artemis Apache Artemis spring-boot-starter-batch 使用Spring Batch批处理 spring-boot-starter-cache Spring Framework’s caching 
spring-boot-starter-cloud-connectors spring-boot-starter-data-cassandra spring-boot-starter-data-cassandra-reactive spring-boot-starter-data-couchbase spring-boot-starter-data-couchbase-reactive spring-boot-starter-data-elasticsearch spring-boot-starter-data-jdbc Spring Data JDBC spring-boot-starter-data-jpa spring-boot-starter-data-ldap spring-boot-starter-data-mongodb spring-boot-starter-data-mongodb-reactive spring-boot-starter-data-neo4j spring-boot-starter-data-redis spring-boot-starter-data-redis-reactive spring-boot-starter-data-rest spring-boot-starter-data-solr spring-boot-starter-freemarker spring-boot-starter-groovy-templates spring-boot-starter-hateoas spring-boot-starter-integration spring-boot-starter-jdbc spring-boot-starter-jersey spring-boot-starter-jooq spring-boot-starter-json spring-boot-starter-jta-atomikos spring-boot-starter-jta-bitronix spring-boot-starter-mail spring-boot-starter-mustache spring-boot-starter-oauth2-client spring-boot-starter-oauth2-resource-server spring-boot-starter-quartz spring-boot-starter-rsocket spring-boot-starter-security spring-boot-starter-test spring-boot-starter-thymeleaf spring-boot-starter-validation spring-boot-starter-web WEB服务包括RESTful,spring MVC应用,继承tomct的容器 spring-boot-starter-web-services spring-boot-starter-webflux spring-boot-starter-websocket 除了上面的应用级别依赖,还有生产环境stater↓ 名称 描述 spring-boot-starter-actuator 帮助监控和管理生产应用程序 如果你想换一些技术可以参考↓ 名称 描述 spring-boot-starter-jetty spring-boot-starter-log4j2 spring-boot-starter-logging spring-boot-starter-reactor-netty spring-boot-starter-tomcat spring-boot-starter-undertow 3.2 结构化代码spring boot:我们很牛逼,不需要任何特定的代码布局就可以开始,但有些结构化可以对编程有帮助 3.2.1使用默认包当类不包含package声明的时候,将视该类在默认程序包中(虽然我们不推荐,它会导致@ComponentScan,@ConfigurationPropertiesScan,@EntityScan,@SpringBootApplication等一些问题) 3.2.2 main类我们建议将main文件放在应用的根目录,@SpringBootApplication的注解会在你的main类上方,它也会隐式定义一些搜索的基础功能,如写JPA的时候,@SpringBootApplication会帮你寻找@Entity注解,文件放在根目录,可以查到所有目录下的@Entity。 3.2.3 常用包tree12345678910111213141516com +- 
example +- myapplication +- Application.java | +- customer | +- Customer.java | +- CustomerController.java | +- CustomerService.java | +- CustomerRepository.java | +- order +- Order.java +- OrderController.java +- OrderService.java +- OrderRepository.java Application.java文件将用@SpringBootApplication注解main主方法package com.example.myapplication;import org.springframework.boot.SpringApplication;import org.springframework.boot.autoconfigure.SpringBootApplication;@SpringBootApplicationpublic class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); }} 3.3 配置类SpringBoot支持基于Java的配置,尽管可以使用XML格式的SpringApplication配置,但我们仍然建议您的主要程序配置是一个简单的配置类,通常,定义main方法的类适合使用@Configuration注解 3.3.1 导入其他配置类不需要把所有配置放在单个配置类中,@Import注解可以帮助你导入其他配置类,另外,你可以使用@ComponentScan来自动获取所有Spring组件,包括@Configuration类 3.3.2 通过xml配置如果你必须使用XML,仍然建议从@Configuration配置类开始,然后你可以使用@ImportResource注解来加载xml配置文件 3.4 自动配置SpringBoot自动配置会尝试根据添加的依赖自动配置Spring应用程序,你需要通过@EnableAutoConfiguration或者@SpringBootApplication(仅能用一个)注解到配置类中 3.4.1逐渐取消自动配置自动配置是非侵入式的,随时可以用自定义配置来替换自动配置 3.4.2禁用特定的自动配置类通过@EnableAutoConfiguration中的exclude禁用,例如 import org.springframework.boot.autoconfigure.*;import org.springframework.boot.autoconfigure.jdbc.*;import org.springframework.context.annotation.*;@Configuration(proxyBeanMethods = false)@EnableAutoConfiguration(exclude={DataSourceAutoConfiguration.class})public class MyConfiguration {} 3.5 Spring beans 和依赖注入(dependency injection)你可以任意使用SpringFramework的技术来定义你的bean和注入依赖,如查找bean的@ComponentScan和构造注入的@Autowired package com.example.service;import org.springframework.beans.factory.annotation.Autowired;import org.springframework.stereotype.Service;@Servicepublic class DatabaseAccountService implements AccountService { //final后续无法修改 private final RiskAssessor riskAssessor; @Autowired //如果bean只有一个构造器,可以省略@Autowired public DatabaseAccountService(RiskAssessor riskAssessor) { this.riskAssessor = riskAssessor; } // ...}
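顺着上面构造注入的例子补一个说明:构造注入(相对于字段注入)的好处之一是,脱离 Spring 容器也能直接 new 出对象、传入桩实现做单元测试。下面是一段不依赖 Spring 的纯 Java 示意,RiskAssessor 的接口形状、风险分和阈值都是演示用的假设:

```java
// 纯 Java 示意:构造注入让依赖可以被手工替换,无需启动 Spring 容器
interface RiskAssessor {
    int assess(String account); // 假设:返回一个 0~100 的风险分
}

class SimpleAccountService {
    private final RiskAssessor riskAssessor; // final:构造后依赖不可再被修改

    SimpleAccountService(RiskAssessor riskAssessor) {
        this.riskAssessor = riskAssessor; // 依赖从构造器传入,而非自己 new
    }

    boolean isRisky(String account) {
        return riskAssessor.assess(account) > 60; // 阈值 60 同样是演示假设
    }
}

public class DiDemo {
    public static void main(String[] args) {
        // 测试时传入一个桩(stub)实现,替代真实的风险评估逻辑
        SimpleAccountService service = new SimpleAccountService(account -> 80);
        System.out.println(service.isRisky("alice")); // true
    }
}
```

这也是文中把 riskAssessor 声明为 final 的意义:对象一旦构造完成,依赖关系就固定下来了。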
3.6@SpringBootApplication 很受欢迎的一个注解,可以用于启用三个功能 @EnableAutoConfiguration @ComponentScan @Configuration 12345678910111213package com.example.myapplication;import org.springframework.boot.SpringApplication;import org.springframework.boot.autoconfigure.SpringBootApplication;@SpringBootApplication // same as @Configuration @EnableAutoConfiguration @ComponentScanpublic class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); }} 3.7run3.7.1 IDE3.7.2 打包运行3.7.3 使用MavenPlugin3.7.4 使用GradlePlugin3.7.5热更新3.8 开发工具1234567<dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <optional>true</optional> </dependency></dependencies> 3.9 打包","raw":null,"content":null},{"title":"Network","date":"2019-03-14T09:20:18.000Z","updated":"2021-07-27T07:09:41.904Z","comments":true,"path":"cnetwork/index.html","permalink":"http://zehai.info/cnetwork/index.html","excerpt":"","text":"基础概念 速率(比特率):在数字信道传送数据位数的速率,b/s,kb/s,Mb/s(mbps:Mb per second)(运营商常用,进制10进制),也可以理解为带宽的概念 存储容量,1Byte=8bit(进制1024) 时延:发送时延+传播时延+排队时延+处理时延 RTT:往返时延,发送开始到发送方收到接收方的ack,即2*传输时延,不含发送时延,处理时延等 信源:产生和发送数据的源头 信宿:接收数据的终点 信道:信号的传输媒介 单工同行:电报,一方只能发送,一方只能接受 半双工通信:对讲机,双方错开说话 全双工通信:视频电话,双方同时说 OSI 七层模型,TCPIP四层why解决互联网异构问题 what 序号 名称 TCP/IP 常见协议 作用 7 应用层 6 表示层 5 会话层 应用层 4 传输层 传输层 3 网络层 网际层 2 数据链路层 网络接口层 1 物理层 difference index OSI TCPIP 1 理论模型 实际应用 2-网络层区别 无连接+面向连接 无连接 3- 传输层区别 面向连接 无连接+面向连接 面向连接:TCP的建立连接,然后收发数据,断开连接 无连接:无需建立连接,直接发数据 五层参考模型就是将TCP/IP的网络接口层分成了:数据链路层和物理层,从底向上依次是:物理层,数据链路层,网络层,传输层,应用层 传输介质what传输介质是第0层,是物理层下面的载体 分类导向性传输介质:固体,如光纤,双绞线,同轴电缆(电视线) 非导向类传输介质:空气,真空,海水,等 物理层why解决计算机的传输媒体多样性的问题 what主机发送数据无需关心底层的介质是双绞线还是光纤,或者是华为家生产的,还是中兴家生产的。 类比就是你要发一个包裹,你可以选择四通一达以及顺丰,也不用管包裹走水陆空那一路,你只要负责发包裹,剩下的事情,物理层解决 概念 码元:固定时长的信号波形(数字脉冲),举个例子:二进制码元,0和1,四进制码元,00,01,10,11,八进制码元000,001,010,100,011,110,101,111,即,一个码元可以携带多个比特信息 码元传输速率:1s传输多少个码元,单位:波特(Baud) 信息传输速率:码元传输速率*n(1个码元有n个比特表示,如101,n=3),值就是带宽b/s 编码 解码 编码 名称 特点 二进制数据 非归零编码(NRZ) 
简单,但无法检错,无法保持同步 归零编码(RZ) 一个码元内要恢复到零 曼彻斯特编码 下跳是1,上跳是0 差分曼彻斯特编码 同1异0 其他了解:反向不归零编码(NRZI),4B/5B编码 奈氏准则(奈奎斯特定理,传输极限,避免码间串扰)definition在理想环境下,为了避免码间串扰,极限码元传输速率为2W Baud,W为信道带宽,单位Hz 极限数据传输率 $$v=2W\\log_2 V(b/s)$$ W:带宽(Hz) V:几进制码元(码元的离散电平数目) 意义 码元传输速率有上限 带宽越宽,极限传输率越高 使用多元制的调制方法,提高码元携带的比特数 香农定理(信噪比)definition $$信噪比= \\frac{信号的平均功率}{噪声的平均功率}$$ 记作:S/N$$信噪比(dB)=10\\log_{10} (S/N)$$单位:dB,分贝 以上是信噪比的两种表现形式 香农定理:解决了带宽受限有噪音的信道中,不产生误差,信息的数据传输率的上限(奈氏准则是理想环境) 传输速率 $$v=W\\log_2 (1+S/N)$$ 单位:b/s W:带宽Hz 题目如果说信噪比是1000,即S/N=1000带入,如果题目说信噪比30dB则带入公式求出S/N的值,再代入公式 意义 信噪比越大,极限传输速率越高 考题注意:题目可能让你求奈氏准则和香农定理的速率,取其最小值为极限速率设备(处理信号) 中继器:再生和还原数字信号 集线器:多口的中继器,放大转发,不能分割冲突域,平分带宽 数据链路层definition数据链路层将网络层的数据可靠的传输到相邻接点的目标机网络层 作用:加强物理层传输原始比特流的功能,将物理层提供的可能出错的连接改为在逻辑上无差错的数据链路 简单来说:数据链路层给报文编号,可以进行流量控制,丢失重发等 作用 提供:无确认无连接,有确认无连接,有确认面向连接服务 链路管理,连接的建立、维持、释放 组帧 流量控制 差错控制 组帧在网络层的数据头尾添加:帧首部,帧尾部,让对方识别帧的开始和结束 组帧的四种方法: 字符计数法:帧首部第一字节记录帧的长度(传输过程中,这个字符可能被修改,或者丢失,后续帧全部错误) 字符填充法:帧首部填充SOH(00000001),尾部填充EOT(00000100)(数据部分有可能有SOH,EOT—>解决方式,转义字符) 零比特填充法:比如首部01111110,则数据部分遇到五个1填充一个0,来避免假SOH 违规编码法:如曼彻斯特会有,高-低–>1,低-高–>0,然后用低低和高高来做SOH,EOT 最大传输单元MTUwhy因为一个帧中IP数据包的内容不可能无限大 whatMTU最大传送单元 差错控制差错分为两种: 位错:比特位数字出错 修正位错: 检错编码:奇偶校验,CRC循环冗余码(只知道错了) 纠错编码:海明码(知道错了,还知道错哪儿了) 帧错:丢失,重复,失序 修复帧错: 重传 冗余编码:数据发送之前,附加冗余位,使之符合某种规则,接收端检查不符合规则就判断为出错(即,奇偶校验,CRC,海明码) 奇偶校验码(n-1位信息元,1位校验元) 奇校验码:1的个数为奇数 偶校验码:1的个数为偶数 例:1100101奇校验–>11100101 "1"可以填到任意位置,只要有奇数个"1",接收方就认为这一段没有出错,但如果丢了两个"1"就检查不出来了,所以出现了CRC CRC循环冗余要传的数据/多项式=商……余数 发送的数据=要传的数据+余数(FCS帧检验序列,又称冗余码) 例题:1101011011 生成多项式10011 要传的数据末尾加上4个0,除以10011余数1110,则发送的数据为:1101011011 1110 检查:11010110111110%10011==0则帧没有出错,否则丢弃 FCS的生成及校验是硬件实现,处理迅速,不会延迟 海明码1.发现双比特错,纠正单比特错 2.工作原理:牵一发而动全身 3.确定校验码位数r–>确定校验码和数据的位置–>求出校验码的值–>检错并纠错 1.确定校验码位数r海明不等式:2^r≥k+r+1(r为冗余信息位,k为信息位) 例如:要发送的数据D=101101 数据位数k=6,满足不等式的最小r为4,也就是海明码有6+4=10位,即数据位6位,校验4位 2.确定校验码和数据的位置校验码放在2的几次方的位置,剩下填数据位就可以了 3.求校验码的值第一位校验码校验二进制最后一位为1的数 第二位校验码校验二进制倒数第二位为1的数字 以此类推 令校验位与选中的数异或为0,就可以得到校验位的值 得到101101的海明码是:0010011101(第1,2,4,8位为校验位)
4.检错并纠正取校验位做异或运算,得到的值就是出错的位置(上述例子中假设第五位出错,则四个校验位的检验结果拼起来就是二进制0101,即5) 流量控制和可靠传输机制what解决发送和接收能力不匹配的问题 difference 数据链路层 传输层 流量控制 点到点(相邻节点) 端到端 手段 收不下,不返回确认帧 接收端发送窗口公告 流量控制方法 停止-等待协议 滑动窗口协议: 回退N帧协议(GBN:go back N) 选择重传协议SR 停止-等待协议(等确认帧再发送)每发送完一个帧就停止发送,等待对方的确认,在收到确认后再发送下一个帧 丢帧重传时间(数据丢失或者ack丢失):>RTT 数据丢失重传 ack丢失,数据重传,接收方丢弃,重传ACK ack迟到,数据重传,接收方丢弃,重传ACK,收到第二次发的确认帧,后续收到迟到的ack丢掉 流水线技术:一次发送多帧(滑动窗口的起源) 滑动窗口协议(窗口多帧发送)采用累积确认 停止等待 GBN SR 发送窗口 1 >1 >1 接受窗口 1 1 >1 GBN特点 上层调用(发送或缓存网络层数据) 累计确认(ack为最后收到的帧的编号) 超时重传 (缺点,选择重传修正这个问题)接收方无缓存,延迟或者出错全部丢弃(如果1号收到2号丢失,3,4号陆续到了都丢弃,等待发送方超时重传2号帧) GBN滑动窗口的长度:$$1≤W≤2^n-1$$因为发送窗口过大,会使得对方无法区别新帧和旧帧,即编号的数目是固定的,循环利用,可能会重复 选择重传协议(SR=GBN+接收方有窗口):缓存收到的帧,返回确认收到帧的编号(不代表编号前的帧都收到),窗口长度:$$1≤W≤2^{n-1}$$发送方窗口=接收方窗口(大了溢出,小了没意义) 信道利用率发送周期内,有效发送数据所占据的比例,也就是(发送数据帧时间)除以(发送数据帧开始到接收到ack的总时间)$$信道利用率=(L/C)/T$$L:T内发送L比特数据 C:发送方数据传输率 T:发送周期,发送到收到ack 信道吞吐率信道利用率* 发送方的发送速率 数据传输速率4kb/s,单向传播时延30ms,如果停止等待协议的信道最大利用率达到80%,数据帧长度为? $$0.8=\\frac{L/4}{L/4+2*30}$$ 信道划分介质访问(高效率利用传输介质)分两种 点对点链路:专有线路,如ppp协议 广播式链路:共享通信介质,如对讲机 介质访问控制 静态分配(不冲突) 频分复用FDM(Frequency Division Multiplexing) 时分复用TDM(time) 波分复用WDM(wave) 码分复用CDM(code) 动态分配 轮询访问: 令牌(不冲突) 随机访问:(冲突) aloha CSMA CSMA/CD CSMA/CA 1.统计时分复用STDM 提出原因:有的主机在这个时间片不会发送信息,信道造成浪费 通过集中器,ABCD四个人,集中器大小设定为3,每来3个人,就发送走一波数据 解决TDM平分带宽的问题,集中器的TDM帧可以发送的数据都是一个人的数据,从而不影响带宽 2.CDMA码分多址,是CDM的一种方式 CDM(后续补充)1个比特分为多个chip(芯片/码片),每个站点被指定一个唯一的m位的chip序列 如何划分信道?
多个站点同时发送数据时候,要求各个站点芯片序列相互正交 多个站点接收数据的时候,数据在信道中被线性相加 ALOHA协议(想发就发)特点:不监听信道,不按时间片发送,随机重发(发的时候彼此不知道冲突,所以可能两个人都发送失败) ALOHA改进:时隙ALOHA协议,将时间分片,用户在时间片开始时刻同步接入网络信道,如果冲突,则下个时间片开始时刻再发送 CSMA协议家族(先听再发) CSMA:carrier sense multiple access CS:载波侦听:发送前检测 MA:多点接入 信道忙 1-坚持CSMA:一直监听到信道闲,冲突则等待随机时间再来一直监听 信道利用率高 两个站点都坚持,死锁 非坚持CSMA:等待随机时间后再监听 减少冲突可能性 信道利用率低 p-坚持CSMA:空闲以p概率传输,忙则以概率1-p等待下个时间片(不必深究),忙则等待随机时间再监听 减少冲突 信道利用率较高 发生冲突后可能会坚持把数据帧发完(提出CD协议) CSMA/CD(先听再说:边听边说) CD:collision detection碰撞检测 在CSMA基础上,发送数据时也监听信道,忙则停止发送–半双工网络 争用期/冲突窗口/碰撞窗口:2T,如果没有碰撞则这次发送不会有冲突 如何确定重传?截断二进制指数退避算法(待完善) 最小帧长(避免还没碰撞检测完,数据已经发送结束了):帧长>=2T*数据发送速率 以太网规定最短帧长64B,凡是小的都是无效帧,丢弃 CSMA/CA(先听再说,礼让说) CA:collision avoidance避免碰撞 应用于无线局域网的冲突 先检测信道是否空闲–> 空闲时发送RTS(request to send:发送端地址,接收端地址,发送持续时间),忙则等待–> 接收端收到RTS,响应CTS(clear to send),在此期间不会再响应别人的RTS–> 发送方收到CTS后,开始发送数据帧(同时预约信道:发送方告知预计传输时间,从而告知别的站点多久后重发)–> 接收端收到数据帧,采用CRC来检验数据,正确则响应ACK,如果丢失遵循上面的退避算法来确定推迟重发时间 CSMA/CD CSMA/CA 传输介质 有线 无线 载波检测方式 电压 能量检测,载波检测,能量波混合检测 冲突类型 检测冲突 避免冲突 相同点:先听再说,监听,冲突后,有限次重传机制 轮询访问介质访问控制轮询协议:主节点轮流和从属节点发送数据 轮询开销大 等待延迟 主节点故障 令牌传递协议: 令牌:一个特殊格式的MAC控制帧,不含任何信息 每个节点可以拿到令牌一段时间,发送数据 令牌开销大 等待延迟 单点故障 应用于环网 适用于负载重,通信量大的网络中 局域网(Local Area Network) 范围小 速度较快 延迟短,误码率低,可靠性高 共享 分布式控制,广播式通信,能广播和组播 星型拓扑,总线型拓扑(CSMA/CD,令牌总线产生逻辑环),环形拓扑(令牌环),树形拓扑 局域网分类:以太网,令牌环网,FDDI网,ATM网,无线局域网 数据链路层=逻辑链路层LLC+介质访问控制MAC层 LLC识别网络层协议并封装,知道如何处理ACK,为网络层提供:无确认无连接,面向连接,带确认连接,高速传送 MAC,帧的封装,拆封,帧的寻址识别,发送接收,链路管理,帧差错控制,屏蔽物理链路种类的差异性 以太网(Ethernet) 便宜 使用广泛 相对简单 速率较高 提供无连接,不可靠的服务: 无连接:无需握手 不可靠,没有编号,不确认,差错丢弃(传输层负责) 通过通信适配器通信:MAC地址,前24位代表厂家,后24位自己规定,常用12个十六进制数表示 无线局域网广域网PPP协议:点对点协议,只支持全双工 简单:无需纠错,无需编号,无需流量控制 封装成帧:帧定界符 透明传输:异步线路字节填充,同步线路比特填充 多种网络层协议:封装IP数据包采用多种协议 多种类型链路:串并行,同异步,光电…… 差错检测:错丢弃 检测连接状态 最大传送单元:数据部分最大MTU 网络层地址协商 数据压缩协商 PPP组成的三个部分: 一个将IP数据包封装到串行链路(同异步串行)的方法 链路控制协议LCP:建立和维护数据链路连接 网络控制协议NCP:PPP支持多种网络层协议,对应NCP来配置,为网络层建立和配置逻辑连接 PPP帧格式 HDLC协议 高级数据链路控制:High-level data link control,是一个同步网上传输数据,面向比特的数据链路层协议 三种站: 主站 从站 复合站 PPP&HDLC共同点 全双工 透明传输 查错但不纠错 不同点: 不同点 PPP HDLC 面向 字节 比特 协议字段 有 没有 序号和ACK 无 有 可靠性 不可靠 可靠 链路层设备
网桥:根据MAC的目的地址进行帧的转发和过滤(隔离冲突域) 过滤通信量,增大吞吐量 扩大物理范围 提高可靠性 互联不同物理层 交换机 网桥 透明网桥:以太网上的站点不知道所发送的帧经过了哪几个网桥,是一种热插拔设备–自学习(通过广播来学习转发表) 源路由网桥:把详细的最佳路由信息(路由最少\\时间最短)放在帧的首部——通过广播方式向目的站发送一个发送帧 以太网交换机(多接口网桥) 直通式:检查地址直接转发(延迟小,可靠性低,无法支持不同速率的端口交换) 存储转发式:将帧放入高速缓存,检查正确性,正确则转发,错误丢弃(延迟大,可靠性高,支持不同速率端口) 隔离冲突域 隔离广播域 物理层(中继器,集线器) × × 链路层(网桥,交换机) √ × 网络层(路由) √ √ 诀窍 广播域,0个路由1个广播域,1个路由2个广播域 冲突域:链路层设备(交换机)有几根线就是几个冲突域 网络层功能把分组从源端传到目的端,为分组交换网上的不同主机提供通信服务 路由选择和分组转发(最佳路径OSPF) 异构网互联 控制拥塞 开环控制(静) 闭环控制(动态控制) 数据交换方式 电路交换(两端一根线直连) 报文交换 分组交换 有两种连接方式 数据报方式:无连接服务(无需建立连接,每个分组都有地址) 虚电路方式:连接服务(建立连接) 数据报服务 虚电路 建立连接 × √ 目的地址 每个分组都有 建立有,分组只有虚电路号 路由选择 每个分组独立进行路由选择转发 同一路径 分组顺序 不保证有序 有序 可靠性 不可靠通信,可靠性由主机保证 可靠性由网络保证 网络故障适应性 遇故障丢失,其他分组路径发生变化 所有经过此节点都丢包 差错处理和流量控制主机 控制,本身不保证 分组交换网负责或者主机负责 报文的封装应用层:报文 传输层:报文段 网络层:IP数据包,分组 数据链路层:帧 物理层:比特流 IP ip数据报格式ip数据报=首部+数据部分 ip 数据报分片MTU以太网最大MTU是1500字节 IP数据报第32-63位,标识(16)+标志(3)+片偏移(13) 标识(16):同一数据报的分片使用同一标识 标志(3):中间位DF,DF=1禁止分片,DF=0允许分片,最低位MF,MF=1后面还有分片,MF=0,后面没有分片了 片偏移:分片后相对位置,除了最后一个分片,其他都是8B的整倍数 总长度:单位1B 片偏移:单位是8B 首部长度:单位是4B NATip地址转化表,通过端口来实现地址映射 IP分类子网掩码无分类编址CIDR ARP(IP-MAC)广播ARP请求分组 源ip+目的ip+源MAC+目的MAC(全1) 单播ARP响应分组 ip+mac 如果A–>B经历5个路由,一共要使用6次ARP协议 DHCP(应用层协议,广播,基于UDP,CS架构)静态配置ip 动态配置ip–>DHCP协议 主机广播DHCP发现 DHCP服务器广播DHCP提供 主机广播DHCP请求 DHCP服务器广播DHCP确认 ICMPICMP支持主机或者路由器 差错报告 网络探寻 ICMP报错 含义 终点不可达 无法交付 源点抑制(已取消) 目标向源主机发送,发慢点 时间超过 TTL=0时,发送超时报文 参数错误 首部字段有问题 重定向 让主机重新选择路由 ICMP差错报告报文数据字段 不发送ICMP差错报文的情况 对本身的报错出错不再报错 第一分片报错,后续分片不报错 组播不报错 特殊地址不报错(0.0.0.0/127.0.0.1) ICMP询问报文: 回送请求和回答报文(ping) 时间戳请求和回答报文(时间同步和测量时间) ICMP应用 PING Traceroute:跟踪分组发送的路径,使用ICMP时间超过差错报文 IPv4(32bit) 不能使用的ip地址 网络号 主机号 作ip源地址 作IP目的地址 用途 全0 全0 √ × 默认路由 全0 特定值 × √ 表示本网内某个特定的主机 全1 全1 x √ 广播地址 特定值 全0 x x 网络地址,表示一个网络 特定值 全1 x √ 直接广播地址,对特定网络上的所有主机广播 127 除全0,1 √ √ 用于本地软件回环测试 另一种 地址范围 网段数 A类 10.0.0.0~10.255.255.255 1 B 172.16.0.0~172.31.255.255 16 C 192.168.0.0~192.168.255.255 256 ABC为专属内部网络地址 IPv6(128bit)首部40B+有效负荷(≥64k) 网络层设备路由 静态路由算法 动态路由算法 全局性:链路状态路由算法OSPF(规模大) 分散性:距离向量路由算法RIP(规模小) 
自治系统AS:在单一技术管理下的一组路由器(一个局域网内,自己管理自己的,要不然路由算法无法完成) 路由选择协议 内部网关协议IGP(AS内)RIP,OSPF 外部网关协议EGP(AS间)BGP RIP定义:一种分布式基于距离向量的路由选择协议,简单,维护自己到目的网络唯一最佳距离(跳数)记录 feature: 仅相邻交换信息 每30s更新路由表,180s无消息则判断邻居没了 故障发现慢(你发现旁边故障了,但邻居以为经过你就可以到达,然后你以为经过邻居,再经过你就可以到达,循环下去,直到双方都变成16跳,才发现网络故障) OSPF(类似Dijkstra)分布式链路状态feature: 自治系统内广播(非RIP的相邻) 交换链路状态(费用,距离,时延,带宽等) 链路状态变化才更新 每隔30分钟刷新一次数据库中的链路状态 故障发现比较快 OSPF分区: BGPAS间通信,交换网络可达性信息,发生变化时更新 OPEN–>UPDATE–>KEEPALIVE–>NOTIFICATION 协议 RIP OSPF BGP 类型 内部 内部 外部 路由算法 距离-向量 链路状态 路径-向量 传递协议 udp ip TCP 路径选择 跳数最少 代价最低 较好,非最佳 交换节点 相邻 所有 相邻 交换内容 自身路由表 所有 首次整个路由表,非首次,变化内容 IP组播(D类地址) 单播 广播 组播(多播)基于UDP IGMP协议+组播路由选择协议 传输层 进程间逻辑通信 复用和分用 差错检测 TCP UDP 端口号(16bit) 服务端0-1023 服务端1024-49151 客户端49152-65535 service port FTP 21 TELNET 23 SMTP 25 DNS 53 TFTP 69 HTTP 80 SNMP 161 Socket=ip+port UDPFEATURE: 无连接 不保证可靠交付 面向报文 无拥塞控制 首部开销小8B,小于20B(TCP) UDP检验 TCPFEATURE 面向连接 点对点 可靠有序 全双工 面向字节流 序号 确认号:期望收到的序号 数据偏移 URG:紧急位,值为1时高优先级发送 ACK:确认位,连接建立后等于1 PSH:推送位,值为1时,接收方尽快交付给应用进程 RST:复位,必须释放连接 SYN:同步位,1,标明是一个连接请求/连接接受报文 FIN:释放连接 窗口:接收窗口,即允许发送方的数据量 校验和 紧急指针:指出URG=1时,紧急数据的字节数 三次握手 四次挥手 流量控制窗口控制待补充","raw":null,"content":null}],"posts":[{"title":"Leetcode233","slug":"2021-08-13-Leetcode233","date":"2021-08-13T03:54:52.000Z","updated":"2021-08-16T08:19:18.747Z","comments":true,"path":"2021/08/13/2021-08-13-Leetcode233/","link":"","permalink":"http://zehai.info/2021/08/13/2021-08-13-Leetcode233/","excerpt":"","text":"233. 
数字 1 的个数难度困难303收藏分享切换为英文接收动态反馈 给定一个整数 n,计算所有小于等于 n 的非负整数中数字 1 出现的个数。 示例 1: 12输入:n = 13输出:6 示例 2: 12输入:n = 0输出:0 提示: 0 <= n <= 2 * 109 Solution拿到题目很奇怪,这道题暴力解答也就是个稍稍大于o(n)的复杂度,作为hard题目出现,只能说明暴力会超时 那么先回顾一下整体的超时代码段 1234567891011121314151617/** * @param {number} n * @return {number} */var countDigitOne = function (n) { let counter = 0; for (let i = n; i > 0; i--) { let num = i; while (num > 0) { if (num % 10 === 1) { counter++; } num = Math.floor(num / 10); } } return counter}; 超时之后我就开始找规律,个位1只会出现1次,十位只会出现10次,百位是100次,之后是数学方法就不解了(懒orz),答案的主要部分 1234for (let k = 0; n >= mulk; ++k) { ans += (Math.floor(n / (mulk * 10))) * mulk + Math.min(Math.max(n % (mulk * 10) - mulk + 1, 0), mulk); mulk *= 10; }","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Hard","slug":"Hard","permalink":"http://zehai.info/tags/Hard/"}]},{"title":"Leetcode133","slug":"2021-08-13-Leetcode133","date":"2021-08-13T03:48:19.000Z","updated":"2021-08-13T03:55:24.571Z","comments":true,"path":"2021/08/13/2021-08-13-Leetcode133/","link":"","permalink":"http://zehai.info/2021/08/13/2021-08-13-Leetcode133/","excerpt":"","text":"133. 克隆图难度中等390收藏分享切换为英文接收动态反馈 给你无向 连通 图中一个节点的引用,请你返回该图的 深拷贝(克隆)。 图中的每个节点都包含它的值 val(int) 和其邻居的列表(list[Node])。 Solutiondfs就可以解决 题目提供了索引就是val,所以map的key可以使用val来简化体积 12345678910111213141516171819202122232425262728293031/** * // Definition for a Node. * function Node(val, neighbors) { * this.val = val === undefined ? 0 : val; * this.neighbors = neighbors === undefined ? 
[] : neighbors; * }; *//** * @param {Node} node * @return {Node} */var cloneGraph = function (node) { if (!node) return; let isVisit = new Map(); //dfs const dfs = function (item) { const newNode = new Node(item.val); isVisit.set(item.val, newNode); // console.log('isVist===>',isVisit) // deep clone for (let neighbor of item.neighbors) { if (!isVisit.has(neighbor.val)) { dfs(neighbor); } newNode.neighbors.push(isVisit.get(neighbor.val)) } } dfs(node); return isVisit.get(node.val);};","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"LeetCode611","slug":"2021-08-04-LeetCode611","date":"2021-08-04T07:37:34.000Z","updated":"2021-08-04T07:50:16.771Z","comments":true,"path":"2021/08/04/2021-08-04-LeetCode611/","link":"","permalink":"http://zehai.info/2021/08/04/2021-08-04-LeetCode611/","excerpt":"","text":"611. 有效三角形的个数难度中等230收藏分享切换为英文接收动态反馈 给定一个包含非负整数的数组,你的任务是统计其中可以组成三角形三条边的三元组个数。 示例 1: 1234567输入: [2,2,3,4]输出: 3解释:有效的组合是: 2,3,4 (使用第一个 2)2,3,4 (使用第二个 2)2,2,3 注意: 数组长度不超过1000。 数组里整数的范围为 [0, 1000]。 Solution排序后根据,a+b>c 套两层for循环确定ab,理论上还可以再套一层for循环确定c,但是复杂度太高,达到o(n^3)的复杂度,我们可以通过二分法,找到a+bc的极限值,极限左边都是符合要求的数据,从而n降为lgn的复杂度 1234567891011121314151617181920212223242526/** * @param {number[]} nums * @return {number} */var triangleNumber = function(nums) { if (nums.length < 3) return 0; let ans = 0; nums = nums.sort(); for (let i = 0; i < nums.length ; i++) { for (let j = i + 1; j < nums.length ; j++) { // 二分 let left = j + 1, right = nums.length - 1, k = j; while (left <= right) { const mid = Math.floor((left + right) / 2); if (nums[mid] < nums[i] + nums[j]) { k = mid; left = mid + 1; } else { right = mid - 1; } } ans += k - j; } } return 
ans;};","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"Leetcode581","slug":"2021-08-03-Leetcode581","date":"2021-08-03T08:12:31.000Z","updated":"2021-08-03T08:17:25.973Z","comments":true,"path":"2021/08/03/2021-08-03-Leetcode581/","link":"","permalink":"http://zehai.info/2021/08/03/2021-08-03-Leetcode581/","excerpt":"","text":"581. 最短无序连续子数组难度中等609收藏分享切换为英文接收动态反馈 给你一个整数数组 nums ,你需要找出一个 连续子数组 ,如果对这个子数组进行升序排序,那么整个数组都会变为升序排序。 请你找出符合题意的 最短 子数组,并输出它的长度。 示例 1: 123输入:nums = [2,6,4,8,10,9,15]输出:5解释:你只需要对 [6, 4, 8, 10, 9] 进行升序排序,那么整个表都会变为升序排序。 示例 2: 12输入:nums = [1,2,3,4]输出:0 示例 3: 12输入:nums = [1]输出:0 提示: 1 <= nums.length <= 104 -105 <= nums[i] <= 105 进阶:你可以设计一个时间复杂度为 O(n) 的解决方案吗? Solution目前想到的方法 偏数学,找到最大最小值为界 排序找差别,确定子数组两边下标 排序后用二分(稍微提升) 123456789101112131415/** * @param {number[]} nums * @return {number} */var findUnsortedSubarray = function (nums) { let start, end = -1, point = 0; let max = -100000, min = 10000;//题目给的大小区间 while (point < nums.length) { // 找分界点 max > nums[point] ? end = point : max = nums[point] min < nums[nums.length - point - 1] ? start = nums.length - point - 1 : min = nums[nums.length - point - 1] point++; } return end === -1 ? 0 : end - start + 1;};","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"2021-08-02-Leetcode743","slug":"2021-08-02-Leetcode743","date":"2021-08-02T13:21:52.000Z","updated":"2021-08-02T15:05:45.408Z","comments":true,"path":"2021/08/02/2021-08-02-Leetcode743/","link":"","permalink":"http://zehai.info/2021/08/02/2021-08-02-Leetcode743/","excerpt":"","text":"743. 
网络延迟时间难度中等371收藏分享切换为英文接收动态反馈 有 n 个网络节点,标记为 1 到 n。 给你一个列表 times,表示信号经过 有向 边的传递时间。 times[i] = (ui, vi, wi),其中 ui 是源节点,vi 是目标节点, wi 是一个信号从源节点传递到目标节点的时间。 现在,从某个节点 K 发出一个信号。需要多久才能使所有节点都收到信号?如果不能使所有节点收到信号,返回 -1 。 示例 1: 12输入:times = [[2,1,1],[2,3,1],[3,4,1]], n = 4, k = 2输出:2 示例 2: 12输入:times = [[1,2,1]], n = 2, k = 1输出:1 示例 3: 12输入:times = [[1,2,1]], n = 2, k = 2输出:-1 提示: 1 <= k <= n <= 100 1 <= times.length <= 6000 times[i].length == 3 1 <= ui, vi <= n ui != vi 0 <= wi <= 100 所有 (ui, vi) 对都 互不相同(即,不含重复边) Solution明显的思路, Dijkstra 123456789101112131415161718192021222324252627282930var networkDelayTime = function (times, n, k) { const INF = Number.MAX_SAFE_INTEGER;// max value const g = new Array(n).fill(INF).map(() => new Array(n).fill(INF)); for (const t of times) { const x = t[0] - 1, y = t[1] - 1; g[x][y] = t[2];// 赋值 } const dist = new Array(n).fill(INF);//distance dist[k - 1] = 0;//k 本身为0 const used = new Array(n).fill(false);//遍历标记 for (let i = 0; i < n; ++i) {//遍历每个顶点 let x = -1; for (let y = 0; y < n; ++y) { // 未标记过并且(x为-1 或者 y的距离小于x的距离) 准备需要更新的节点 if (!used[y] && (x === -1 || dist[y] < dist[x])) { x = y; } } used[x] = true;//遍历标记 for (let y = 0; y < n; ++y) { // k到x的最小值 dist[y] = Math.min(dist[y], dist[x] + g[x][y]); } } let ans = Math.max(...dist);//最小值的最大值 return ans === INF ? 
-1 : ans;};console.log(networkDelayTime([[2, 1, 1], [2, 3, 1], [3, 4, 1]], 4, 2))","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"mongodbTransaction","slug":"2021-08-02-mongodbTransaction","date":"2021-08-02T01:34:41.000Z","updated":"2021-08-02T01:36:53.496Z","comments":true,"path":"2021/08/02/2021-08-02-mongodbTransaction/","link":"","permalink":"http://zehai.info/2021/08/02/2021-08-02-mongodbTransaction/","excerpt":"","text":"前几天面试官说mongodb5出了事务,就赶紧来看看,毕竟业务层处理还是挺麻烦的,以前只支持操作的原子性","raw":null,"content":null,"categories":[{"name":"mongoDB","slug":"mongoDB","permalink":"http://zehai.info/categories/mongoDB/"}],"tags":[{"name":"transactions","slug":"transactions","permalink":"http://zehai.info/tags/transactions/"}]},{"title":"Leetcode144","slug":"2021-07-31-Leetcode144","date":"2021-07-31T15:49:38.000Z","updated":"2021-07-31T16:13:20.143Z","comments":true,"path":"2021/07/31/2021-07-31-Leetcode144/","link":"","permalink":"http://zehai.info/2021/07/31/2021-07-31-Leetcode144/","excerpt":"","text":"144. 二叉树的前序遍历总体思路递归,easy等级 1234567891011121314151617181920212223/** * Definition for a binary tree node. * function TreeNode(val, left, right) { * this.val = (val===undefined ? 0 : val) * this.left = (left===undefined ? null : left) * this.right = (right===undefined ? 
null : right) * } *//** * @param {TreeNode} root * @return {number[]} */var preorderTraversal = function (root) { const ans = []; recursion(root, ans); return ans};function recursion(root, ans) { if (root === null) return; ans.push(root.val); recursion(root.left, ans) recursion(root.right, ans)}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"second-minimum-node-in-a-binary-tree","slug":"2021-07-27-second-minimum-node-in-a-binary-tree","date":"2021-07-27T13:53:51.000Z","updated":"2021-07-31T16:13:27.295Z","comments":true,"path":"2021/07/27/2021-07-27-second-minimum-node-in-a-binary-tree/","link":"","permalink":"http://zehai.info/2021/07/27/2021-07-27-second-minimum-node-in-a-binary-tree/","excerpt":"","text":"Leetcode 671671. 二叉树中第二小的节点难度简单198收藏分享切换为英文接收动态反馈 给定一个非空特殊的二叉树,每个节点都是正数,并且每个节点的子节点数量只能为 2 或 0。如果一个节点有两个子节点的话,那么该节点的值等于两个子节点中较小的一个。 更正式地说,root.val = min(root.left.val, root.right.val) 总成立。 给出这样的一个二叉树,你需要输出所有节点中的第二小的值。如果第二小的值不存在的话,输出 -1 。 示例 1: 123输入:root = [2,2,5,null,null,5,7]输出:5解释:最小的值是 2 ,第二小的值是 5 。 示例 2: 123输入:root = [2,2,2]输出:-1解释:最小的值是 2, 但是不存在第二小的值。 提示: 树中节点数目在范围 [1, 25] 内 1 <= Node.val <= 231 - 1 对于树中每个节点 root.val == min(root.left.val, root.right.val) Solution如题意,父节点就是最小值,借鉴了答案,递归条件写错了 12345678910111213141516171819202122232425262728293031323334/** * Definition for a binary tree node. * function TreeNode(val, left, right) { * this.val = (val===undefined ? 0 : val) * this.left = (left===undefined ? null : left) * this.right = (right===undefined ? 
null : right) * } *//** * @param {TreeNode} root * @return {number} */var findSecondMinimumValue = function (root) { let ans = -1; const parentValue = root.val;//root value const dfs = (node) => { if (node === null) { return; } if (ans !== -1 && node.val >= ans) { return; } if (node.val > parentValue) { ans = node.val; } dfs(node.left); dfs(node.right); } dfs(root); return ans;};","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"1109. 航班预订统计","slug":"2021-07-26-航班预订统计","date":"2021-07-26T08:59:16.000Z","updated":"2021-07-27T14:51:25.175Z","comments":true,"path":"2021/07/26/2021-07-26-航班预订统计/","link":"","permalink":"http://zehai.info/2021/07/26/2021-07-26-%E8%88%AA%E7%8F%AD%E9%A2%84%E8%AE%A2%E7%BB%9F%E8%AE%A1/","excerpt":"","text":"#Leetcode-1109. 航班预订统计 难度中等157收藏分享切换为英文接收动态反馈 这里有 n 个航班,它们分别从 1 到 n 进行编号。 有一份航班预订表 bookings ,表中第 i 条预订记录 bookings[i] = [firsti, lasti, seatsi] 意味着在从 firsti 到 lasti (包含 firsti 和 lasti )的 每个航班 上预订了 seatsi 个座位。 请你返回一个长度为 n 的数组 answer,其中 answer[i] 是航班 i 上预订的座位总数。 示例 1: 123456789输入:bookings = [[1,2,10],[2,3,20],[2,5,25]], n = 5输出:[10,55,45,25,25]解释:航班编号 1 2 3 4 5预订记录 1 : 10 10预订记录 2 : 20 20预订记录 3 : 25 25 25 25总座位数: 10 55 45 25 25因此,answer = [10,55,45,25,25] 示例 2: 12345678输入:bookings = [[1,2,10],[2,2,15]], n = 2输出:[10,25]解释:航班编号 1 2预订记录 1 : 10 10预订记录 2 : 15总座位数: 10 25因此,answer = [10,25] 提示: 1 <= n <= 2 * 104 1 <= bookings.length <= 2 * 104 bookings[i].length == 3 1 <= firsti <= lasti <= n 1 <= seatsi <= 104 Solution题目到是很简单,主要做的就是一维数组做个累加,时间复杂度O(N^2) 12345678910111213var corpFlightBookings = function (bookings, n) { const answer = []; for (let i = 0; i < n; i++)answer.push(0); for (let item of bookings) { const [first, last, seats] = item; for (let j = first; j <= last; j++) { answer[j-1] += seats; } } return answer;};console.log(corpFlightBookings([[1, 2, 10], [2, 3, 20], 
[2, 5, 25]], 5)) 但是时间只超过了50%,就考虑问题 问题应该在O^2的复杂度上 后来想遍历的时候 Array(n).fill(0) 代码更优美 把增量加在数组里,最后走for循环跑一次就可以降一层for循环 123const [first, last, seats] = item;answer[first - 1] += seats;if (last < n) answer[last] -= seats;","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"Leetcode剑指offer53","slug":"2021-07-16-Leetcode剑指53","date":"2021-07-16T15:50:55.000Z","updated":"2021-07-31T16:13:38.672Z","comments":true,"path":"2021/07/16/2021-07-16-Leetcode剑指53/","link":"","permalink":"http://zehai.info/2021/07/16/2021-07-16-Leetcode%E5%89%91%E6%8C%8753/","excerpt":"","text":"LeetCode:剑指 Offer 53打开网页,突然看到日推是easy难度,本来想就几行代码的事情,弄完了就休息了,提交后–傻了眼–:cry:,居然只打败了6.27%的人,草率了 题目: 剑指 Offer 53 - I. 在排序数组中查找数字 I统计一个数字在排序数组中出现的次数。 示例 1: 输入: nums = [5,7,7,8,8,10], target = 8输出: 2示例 2: 输入: nums = [5,7,7,8,8,10], target = 6输出: 0 限制: 0 <= 数组长度 <= 50000 1234567891011121314151617// first commit// 执行用时:2 ms, 在所有 Java 提交中击败了6.27%的用户// 内存消耗:40.8 MB, 在所有 Java 提交中击败了98.84%的用户class Solution { public int search(int[] nums, int target) { if(nums==null||nums.length<0){ return 0; } int count = 0; for (Integer num : nums) { if(num>target)break; if (target == num) count++; } return count; }} 找到了耗时1ms的答案一看,是foreach替换成了for循环 果然下标访问更快一些 以下贴一个最优解: 12345678910111213141516171819202122232425class Solution { public int search(int[] nums, int target) { // 就是left right 更快的定位,总体复杂度差不多,不过 int left = getRight(nums,target-1); int right = getRight(nums,target); return right-left; } public int getRight(int[] nums ,int target){ int left = 0; int right = nums.length-1; while(left<=right){ int mid = (left+right)/2; if(nums[mid]>target){ right = mid-1; } else if(nums[mid]<=target){ left = mid + 1; } } return left; } 
}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"prototype","slug":"2021-03-07-prototype","date":"2021-03-07T06:55:02.000Z","updated":"2021-07-27T07:09:41.900Z","comments":true,"path":"2021/03/07/2021-03-07-prototype/","link":"","permalink":"http://zehai.info/2021/03/07/2021-03-07-prototype/","excerpt":"","text":"Prototype含义:proto(/ˈproʊtə/)原始, 原型, 原始的 目的:补充JavaScript对于对象的支持,通过prototype来实现class中的method 过程:熟悉实例对象构造函数原型 三者之间的关系 1234567//构造函数 创建对象function Dog() {}var dog = new Dog();//Dog 为构造函数,dog为实例对象dog.name = '柯基';console.log(dog.name) // 柯基 构造函数通过prototype访问原型(一个类的属性,对象都可以访问) 实例对象通过 __proto__ 访问原型 === 构造函数通过prototype访问原型(原型也有__proto__) 实例原型通过constructor访问构造函数(Dog=== Dog.prototype.constructor) 原型遵循向上原则,即找不到就不断向上(prototype)查询 原型因为不停延长形成链,称作原型链,但是 Object.prototype.__proto__ 的值为 null 跟 Object.prototype 没有原型 原型链大概实现了类(Class)以及继承(Extend)的问题,但它并不是复制,是建立一种关联,通过prototype/__proto__ 来访问其他对象的属性和方法,属于委托/借用 Extend一共分为6种 原型链继承 借用构造函数(经典继承) 组合继承 原型式继承 寄生式继承 寄生组合式继承 原型链继承123456789101112131415function Parent () { this.names = ['kevin', 'daisy'];}function Child () {}Child.prototype = new Parent();var child1 = new Child();child1.names.push('yayu');console.log(child1.names); // ["kevin", "daisy", "yayu"]var child2 = new Child();console.log(child2.names); // ["kevin", "daisy", "yayu"] 问题: 属性被所有child共享 创建child实例时,不能向parent传参 借用构造函数123456789101112131415function Parent (name) { this.name = name;}function Child (name) { Parent.call(this, name);}var child1 = new Child('kevin');console.log(child1.name); // kevinvar child2 = new Child('daisy');console.log(child2.name); // daisy 优点: 避免引用类型的属性被所有实例共享 可以在Child中向Parent传参 缺点: 方法在构造函数中定义,每次创建实例都会创建一遍方法 组合继承以上两种方法的组合,为最常用的继承方式 1234567891011121314151617181920212223242526272829function Parent (name) { this.name = name; this.colors = ['red', 'blue', 
'green'];}Parent.prototype.getName = function () { console.log(this.name)}function Child (name, age) { Parent.call(this, name); this.age = age;}Child.prototype = new Parent();Child.prototype.constructor = Child;var child1 = new Child('kevin', '18');child1.colors.push('black');console.log(child1.name); // kevinconsole.log(child1.age); // 18console.log(child1.colors); // ["red", "blue", "green", "black"]var child2 = new Child('daisy', '20');console.log(child2.name); // daisyconsole.log(child2.age); // 20console.log(child2.colors); // ["red", "blue", "green"]","raw":null,"content":null,"categories":[{"name":"JavaScript","slug":"JavaScript","permalink":"http://zehai.info/categories/JavaScript/"}],"tags":[{"name":"prototype","slug":"prototype","permalink":"http://zehai.info/tags/prototype/"}]},{"title":"dockerMysql","slug":"2021-02-10-dockerMysql","date":"2021-02-10T06:54:33.000Z","updated":"2021-07-27T07:09:41.899Z","comments":true,"path":"2021/02/10/2021-02-10-dockerMysql/","link":"","permalink":"http://zehai.info/2021/02/10/2021-02-10-dockerMysql/","excerpt":"","text":"link:mysql-docker 支持标签 8.0.23, 8.0, 8, latest 5.7.33, 5.7, 5 5.6.51, 5.6 快速手册 issues: https://github.com/docker-library/mysql/issues 支持平台: (more info) amd64 发布image 详情: repo-info repo’s repos/mysql/ directory (history) (image metadata, transfer size, etc) Image 更新: official-images repo’s library/mysql labelofficial-images repo’s library/mysql file (history) 描述来源: docs repo’s mysql/ directory (history) 什么是 MySQL?MySQL 是最受欢迎的,开源的数据库. 凭借被验证过的性能表现,可靠性,易用性, MySQL已经成为基于web的应用程序的 主要选择,包括完整的个人项目和网站项目(电子商务,信息服务),也包括优秀的 Facebook, Twitter, YouTube, Yahoo! 
如何使用mysql image创建 mysql 服务实例启动 MySQL 比较简单: 1$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag some-mysql 容器名称 my-secret-pw 是root账户的密码 tag 是mysql的版本 通过mysql命令行连接mysql以下命令可以启动mysql容器并运行终端,执行SQL语句 1$ docker run -it --network some-network --rm mysql mysql -hsome-mysql -uexample-user -p some-mysql 容器名称 some-network 连接网络(方便容器间访问) 也可以直接运行客户端,访问远程数据库 1$ docker run -it --rm mysql mysql -hsome.mysql.host -usome-mysql-user -p 更多命令请访问 MySQL documentation 使用docker stack 或docker-compose部署示例stack.yml 12345678910111213141516# Use root/example as user/password credentialsversion: '3.1'services: db: image: mysql command: --default-authentication-plugin=mysql_native_password restart: always environment: MYSQL_ROOT_PASSWORD: example adminer: image: adminer restart: always ports: - 8080:8080 docker stack deploy -c stack.yml mysql docker-compose -f stack.yml up 启动后, 访问 http://swarm-ip:8080, http://localhost:8080, or http://host-ip:8080 shell访问查看 MySQL 日志使用 docker exec 可以让你在容器内执行命令,命令如下 1$ docker exec -it some-mysql bash 容器日志: 1$ docker logs some-mysql 自定义 MySQL 配置文件mysql默认配置文件在 /etc/mysql/my.cnf, 也可能指定了额外文件如: /etc/mysql/conf.d or /etc/mysql/mysql.conf.d. 请检查mysqlimage本身的相关文件和目录以了解更多信息 如果 /my/custom/config-file.cnf 是你自定义配置文件的路径和名字, 你可以这样启动你的mysql 容器 1$ docker run --name some-mysql -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag 你会启动一个用你自定义配置 /etc/mysql/my.cnf and /etc/mysql/conf.d/config-file.cnf, 的mysql容器 不使用cnf 文件配置很多配置都可以传给 mysqld. 使你自定义容器而不需要 cnf 文件. 如当你想改变默认编码和排序规则,使用 UTF-8 (utf8mb4) 只需要执行如下命令: 1$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci 如果你想看到所有的配置项,只需要执行: 1$ docker run -it --rm mysql:tag --verbose --help 环境变量docker run 时,可以通过一个或多个环境变量进行配置. 不过需要注意,如果使用已经包含数据库的数据目录启动容器,以下变量不会产生影响,任何之前存在的数据库在容器启动时将保持不变. MYSQL_ROOT_PASSWORD该变量是必须的,是root账户的密码. MYSQL_DATABASE该变量可选,允许在启动时,指定数据库的名称. 如果提供了用户名/密码,用户会被赋予超级权限. 
MYSQL_USER, MYSQL_PASSWORD可选变量,用于创建新用户和密码,用户将获得超级管理员权限,两个参数都是必须的. 注意:不需要使用该机制来创建root超级用户,默认使用 MYSQL_ROOT_PASSWORD 来创建密码 MYSQL_ALLOW_EMPTY_PASSWORD可选变量,设置非空值(如yes),允许root用户无密码启动容器. 注意: 除非你知道你在做什么,否则不建议设置为 yes ,因为这将使mysql实例完全不受保护,允许所有人获得完全的超级用户权限. MYSQL_RANDOM_ROOT_PASSWORD可选变量,设置非空值(如yes),使用pwgen , 为root用户随机生成密码 .密码将被打印. MYSQL_ONETIME_PASSWORD设置用户 初始化完成后过期,在首次登录时候强制修改密码. 任何非空值将激活这个配置,注意:仅支持5.6+版本,以下版本会报错 MYSQL_INITDB_SKIP_TZINFO默认,entrypoint脚本自动加载CONVERT_TZ()函数需要的时区数据,如果不需要,任何非空值都将禁用时区加载 Docker Secrets通过环境变量传递敏感信息,还有另一种方法, _FILE 可以附加到前面列的环境变量,使得可以从文件中加载变量的值,特别是,这可以用于从存放在/run/secrets/<secret_name>中的docker secrets加载密码, 如 : 1$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mysql:tag 目前仅支持: MYSQL_ROOT_PASSWORD, MYSQL_ROOT_HOST, MYSQL_DATABASE, MYSQL_USER, 和MYSQL_PASSWORD. 初始化新实例刚启动容器,指定名字的新数据库会被创建,并且根据提供的变量初始化. 此外,它将执行扩展名为.sh, .sql 和 .sql.gz ( /docker-entrypoint-initdb.d文件夹中).文件将按照字母顺序执行. 你可以轻松使用dump备份填充数据库. 默认情况下,sql文件将被保存在 MYSQL_DATABASE 指定的数据库中. Caveats//告诫数据存储在哪里重要内容:有几种方式在容器运行时存储数据. 我们推荐 mysql 用户熟悉可用的选项,包括: 让docker使用自己的内部volume 将数据库文件写入主机系统上的磁盘(而不在容器内)从而管理数据库数据的存储。这也是默认的配置,也非常简单透明。缺点是相比直接部署找文件困难. 在主机上创建一个数据目录,并将其装载到容器内部的一个目录中,使得数据库文件放置在主机已知的位置上,更轻松访问文件,缺点是需要确保目录存在,且有权限和安全机制 Docker 文档是理解不同存储选项和变量的最好起步,并且有很多博客论坛讨论并提供建议,我们将简单展示基本过程: 创建文件夹在主机如 /my/own/datadir. 启动 mysql 容器 1$ docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag -v /my/own/datadir:/var/lib/mysql 将 /my/own/datadir 目录从主机装入容器中作为 /var/lib/mysql 默认不传-v情况下,mysql将写入其他数据文件. 直到初始化完成才有连接如果容器启动没有初始化数据库,则创建默认数据库. 初始化完成之前不会接受传入连接. 在使用自动化工具如 docker-compose同时启动多个容器时,这可能会导致问题. 如果应用尝试连接不提供服务的mysql,需要继续重试等待连接成功. 官方示例, 详见 WordPress or Bonita. 现用数据库使用如果在一个有mysql数据目录的volume启动mysql,应省略 $MYSQL_ROOT_PASSWORD变量; 即使填写也不会生效, 且不会更改预先存在的数据库. 
以任意用户身份运行如果你正确设置了目录权限,或者你需要使用特定的uid/gid运行mysqld,则可以通过 --user 设为任意值(root/0外)来实现所需的权限/配置: 1234$ mkdir data$ ls -lnd datadrwxr-xr-x 2 1000 1000 4096 Aug 27 15:54 data$ docker run -v "$PWD/data":/var/lib/mysql --user 1000:1000 --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag 创建备份大多数工具都会正常工作,尽管他们的使用在某些情况下可能有点复杂, 以确保可以访问mysqld服务器,确保这一点的一个简单方法是使用 docker exec 并从同一容器运行工具,如: 1$ docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql 从备份还原数据1$ docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql","raw":null,"content":null,"categories":[{"name":"docker","slug":"docker","permalink":"http://zehai.info/categories/docker/"}],"tags":[{"name":"mysql","slug":"mysql","permalink":"http://zehai.info/tags/mysql/"}]},{"title":"currying","slug":"2021-01-15-currying","date":"2021-01-15T06:54:00.000Z","updated":"2021-07-27T07:09:41.899Z","comments":true,"path":"2021/01/15/2021-01-15-currying/","link":"","permalink":"http://zehai.info/2021/01/15/2021-01-15-currying/","excerpt":"","text":"柯里化是一种将使用多个参数的一个函数转换成一系列使用一个参数的函数的技术 12345678910function add(a, b) { return a + b;}// 执行 add 函数,一次传入两个参数即可add(1, 2) // 3// 假设有一个 curry 函数可以做到柯里化var addCurry = curry(add);addCurry(1)(2) // 3","raw":null,"content":null,"categories":[{"name":"JavaScript","slug":"JavaScript","permalink":"http://zehai.info/categories/JavaScript/"}],"tags":[{"name":"currying","slug":"currying","permalink":"http://zehai.info/tags/currying/"}]},{"title":"closure","slug":"2021-01-08-closure","date":"2021-01-08T06:53:22.000Z","updated":"2021-07-27T07:09:41.899Z","comments":true,"path":"2021/01/08/2021-01-08-closure/","link":"","permalink":"http://zehai.info/2021/01/08/2021-01-08-closure/","excerpt":"","text":"前置知识:JavaScript是静态作用域 闭包:访问自由变量的函数 1234567var a = 1;//既不是foo的局部变量,也不是foo函数的参数,a为自由变量function foo() { 
console.log(a);}foo();//1 即使上下文被销毁,它仍然存在,因为在作用域链上被引用了,是js的一个特性,目前如PHP,Java不会原生支持 面试题 常见的新手面试题,我遇到过好几次(作用域+闭包考点) 123456789101112131415161718192021222324252627var data = [];for (var i = 0; i < 3; i++) { data[i] = function () { console.log(i); };}data[0]();data[1]();data[2]();//closure var data = [];for (var i = 0; i < 3; i++) { data[i] = (function (i) { return function(){ console.log(i); } })(i);}data[0]();//不用找global的idata[1]();data[2]();","raw":null,"content":null,"categories":[{"name":"JavaScript","slug":"JavaScript","permalink":"http://zehai.info/categories/JavaScript/"}],"tags":[{"name":"closure","slug":"closure","permalink":"http://zehai.info/tags/closure/"}]},{"title":"vscode访问服务器文件","slug":"2021-01-05-vscoderemote","date":"2021-01-05T06:49:27.000Z","updated":"2021-07-27T07:09:41.899Z","comments":true,"path":"2021/01/05/2021-01-05-vscoderemote/","link":"","permalink":"http://zehai.info/2021/01/05/2021-01-05-vscoderemote/","excerpt":"","text":"1.install remote ssh in vscode 2.click remote explorer and select ssh targets 3.click remote ssh configure or press F1 and input remote-ssh:Open configuration file 4.select path ~/.ssh/config,and modify config file if you don't have rsa ,please generate keys before 123456//optionalssh-keygen# passphrase can be empty and then generate keys in `~/.ssh`# put *.pub (public key) to your server (~/.ssh/) and execute `cat id_rsa.pub >> authorized_keys` to merge with previous file# now rsa keys are ready 1234567Host alias HostName 8.888.88.8 User root IdentityFile ~/.ssh/id_rsa RSAAuthentication yes PubkeyAuthentication yes PasswordAuthentication no Host alias–>your remote server name hostName–>server ip User–>login username IdentityFile–>private key path RSAAuthentication–>optional PubkeyAuthentication–>optional PasswordAuthentication–>no password login 5.login without password 
ready","raw":null,"content":null,"categories":[{"name":"skills","slug":"skills","permalink":"http://zehai.info/categories/skills/"}],"tags":[{"name":"vscode","slug":"vscode","permalink":"http://zehai.info/tags/vscode/"}]},{"title":"2020-10-16-promise用法","slug":"2020-10-16-promise用法","date":"2020-10-16T10:42:09.000Z","updated":"2021-07-27T07:09:41.898Z","comments":true,"path":"2020/10/16/2020-10-16-promise用法/","link":"","permalink":"http://zehai.info/2020/10/16/2020-10-16-promise%E7%94%A8%E6%B3%95/","excerpt":"","text":"WhatECMAscript 6 原生提供了 Promise 对象。 Promise 对象代表了未来将要发生的事件,用来传递异步操作的消息。 Promise 对象有以下两个特点:1、对象的状态不受外界影响。Promise 对象代表一个异步操作,有三种状态: pending: 初始状态,不是成功或失败状态。 fulfilled: 意味着操作成功完成。 rejected: 意味着操作失败。 只有异步操作的结果,可以决定当前是哪一种状态,任何其他操作都无法改变这个状态。这也是 Promise 这个名字的由来,它的英语意思就是「承诺」,表示其他手段无法改变。 2、一旦状态改变,就不会再变,任何时候都可以得到这个结果。Promise 对象的状态改变,只有两种可能:从 Pending 变为 Resolved 和从 Pending 变为 Rejected。只要这两种情况发生,状态就凝固了,不会再变了,会一直保持这个结果。就算改变已经发生了,你再对 Promise 对象添加回调函数,也会立即得到这个结果。这与事件(Event)完全不同,事件的特点是,如果你错过了它,再去监听,是得不到结果的。 1234var promise = new Promise(function(resolve, reject) { // 异步处理 // 处理结束后、调用resolve 或 reject}); 以上来自:菜鸟https://www.runoob.com/w3cnote/javascript-promise-object.html Why因为在2020年01月07日有一篇文章讲了使用promise实现延时队列的一道面试题,因为之前写业务没有用到过所以一直以为用处不大,但今天对接阿里的录音文件识别转文字的接口中,示例代码是一个setInterval轮询得到结果的一种方式,但是他带来了一个很严重的问题 !!没有办法返回前端转文字的结果!! 
大概代码如下 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051//url:https://help.aliyun.com/document_detail/94242.html?spm=a2c4g.11174283.6.601.15eb7275a8rq00// 这段代码会异步执行,可以得到结果,但是直接用这个代码返回给前端client.submitTask(taskParams, options).then((response) => { console.log(response); // 服务端响应信息的状态描述StatusText。 var statusText = response.StatusText; if (statusText != 'SUCCESS') { console.log('录音文件识别请求响应失败!') return; } console.log('录音文件识别请求响应成功!'); // 获取录音文件识别请求任务的TaskId,以供识别结果查询使用。 var taskId = response.TaskId; /** * 以TaskId为查询参数,提交识别结果查询请求。 * 以轮询的方式进行识别结果的查询,直到服务端返回的状态描述为"SUCCESS"、SUCCESS_WITH_NO_VALID_FRAGMENT, * 或者为错误描述,则结束轮询。 */ var taskIdParams = { TaskId : taskId }; var timer = setInterval(() => { client.getTaskResult(taskIdParams).then((response) => { console.log('识别结果查询响应:'); console.log(response); var statusText = response.StatusText; if (statusText == 'RUNNING' || statusText == 'QUEUEING') { // 继续轮询,注意间隔周期。 } else { if (statusText == 'SUCCESS' || statusText == 'SUCCESS_WITH_NO_VALID_FRAGMENT') { console.log('录音文件识别成功:'); var sentences = response.Result; console.log(sentences); } else { console.log('录音文件识别失败!'); } // 退出轮询 clearInterval(timer); } }).catch((error) => { console.error(error); // 异常情况,退出轮询。 clearInterval(timer); }); }, 10000); }).catch((error) => { console.error(error); });} How使用promise进行包裹,等到promise内部的函数取到了结果在返回 12345678if (statusText == 'SUCCESS' || statusText == 'SUCCESS_WITH_NO_VALID_FRAGMENT') { console.log('录音文件识别成功:'); var sentences = response.Result; console.log(sentences); //这里新增resolve} else { console.log('录音文件识别失败!');} 外层通过如下代码实现 1234var promise = new Promise(function(resolve, reject) { // 异步处理 // 处理结束后、调用resolve 或 reject}); 1234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950515253545556575859606162636465async function getWords() { return new Promise((resolve, reject) => { client .submitTask(taskParams, options) .then(response => { console.log(response); // 
服务端响应信息的状态描述StatusText。 const statusText = response.StatusText; if (statusText != 'SUCCESS') { console.log('录音文件识别请求响应失败!'); } console.log('录音文件识别请求响应成功!'); // 获取录音文件识别请求任务的TaskId,以供识别结果查询使用。 const taskId = response.TaskId; /** * 以TaskId为查询参数,提交识别结果查询请求。 * 以轮询的方式进行识别结果的查询,直到服务端返回的状态描述为"SUCCESS"、SUCCESS_WITH_NO_VALID_FRAGMENT, * 或者为错误描述,则结束轮询。 */ const taskIdParams = { TaskId: taskId, }; const timer = setInterval(() => { client .getTaskResult(taskIdParams) .then(response => { console.log('识别结果查询响应:'); console.log(response); const statusText = response.StatusText; if (statusText == 'RUNNING' || statusText == 'QUEUEING') { // 继续轮询,注意间隔周期。 } else { if ( statusText == 'SUCCESS' || statusText == 'SUCCESS_WITH_NO_VALID_FRAGMENT' ) { console.log('录音文件识别成功:'); let sentences = ''; for (const s of response.Result.Sentences) { sentences += s.Text; } console.log(response.Result); resolve(sentences);//**重点**// // return sentences; } else { console.log('录音文件识别失败!'); } // 退出轮询 clearInterval(timer); } }) .catch(error => { console.error(error); // 异常情况,退出轮询。 clearInterval(timer); }); }, 10000); }) .catch(error => { console.error(error); }); }); } return await getWords();//返回前端,翻译结果 另外记录一件事情,左侧单元图标地址:https://fontawesome.com/v4.7.0/icons/","raw":null,"content":null,"categories":[],"tags":[]},{"title":"NodeJS","slug":"2020-09-27-NodeJS","date":"2020-09-27T10:42:36.000Z","updated":"2021-07-27T07:09:41.898Z","comments":true,"path":"2020/09/27/2020-09-27-NodeJS/","link":"","permalink":"http://zehai.info/2020/09/27/2020-09-27-NodeJS/","excerpt":"","text":"https://github.com/theanarkh/understand-nodejs 文档还是不错的","raw":null,"content":null,"categories":[],"tags":[]},{"title":"Jupyter","slug":"2020-09-25-Jupyter","date":"2020-09-25T07:48:37.000Z","updated":"2021-07-27T07:09:41.898Z","comments":true,"path":"2020/09/25/2020-09-25-Jupyter/","link":"","permalink":"http://zehai.info/2020/09/25/2020-09-25-Jupyter/","excerpt":"","text":"whatJupiter = Julia + Python + R Jupyter 
notebook (http://jupyter.org/) is a web application that lets users combine narrative text, math equations, code, and visualizations into a single, easy-to-share document. why it keeps code and documentation together, which makes AI and big-data code more intuitive to write; code runs cell by cell; shell commands run directly with no environment switching; and so on how Download images $docker pull jupyter/scipy-notebook:latest$docker run -itd --rm -p 1000:8888 -e JUPYTER_ENABLE_LAB=yes -v /home/zehai/jupyter:/home/jovyan/work --name jupyter jupyter/scipy-notebook:latest Run docker logs -f with the container’s ID and find the token: To access the notebook, open this file in a browser: file:///home/jovyan/.local/share/jupyter/runtime/nbserver-6-open.htmlOr copy and paste one of these URLs: http://896bb1e66101:8888/?token=fda8565a9b5cd5b8c621b45322ee72f716fd7ddea089fb51 or http://127.0.0.1:8888/?token=fda8565a9b5cd5b8c621b45322ee72f716fd7ddea089fb51 For more info, visit the official docs. Enjoy! (pics powered by chevereto)","raw":null,"content":null,"categories":[],"tags":[]},{"title":"chevereto","slug":"2020-09-15-Chevereto","date":"2020-09-15T02:16:33.000Z","updated":"2021-07-27T07:09:41.898Z","comments":true,"path":"2020/09/15/2020-09-15-Chevereto/","link":"","permalink":"http://zehai.info/2020/09/15/2020-09-15-Chevereto/","excerpt":"","text":"what To solve some problems: some sites only support markdown and can’t host uploaded pictures, such as v2ex.com; some pics you don’t want to give to others for long, such as your interesting story; it can speed up your blog when serving bigger pics; and so on. A Picture Bed can offer you an excellent platform to share your pictures and protect them; however, it has one problem: you need a server to run the service, even though you can use 七牛云, alioss, or weibo for free. 
why Chevereto fits that aim. What I find: dockerhub has chevereto images; combined with ShareX (only for windows 😢), chevereto lets you write markdown essays easily; it has an api, so you can make it stronger; Chevereto Free is at v1.2.2 now; other things you can discover by yourself. how Chevereto is a php project; I use docker to run it: docker pull nmtan/chevereto:latest//use docker-compose.yml(next block)// or docker run docker run -it --name chevereto -d -p 8000:80 -v "/home/xxx/images":/var/www/html/images -e "CHEVERETO_DB_HOST=127.0.0.1" -e "CHEVERETO_DB_USERNAME=root" -e "CHEVERETO_DB_PASSWORD=rootpass" -e "CHEVERETO_DB_NAME=chevereto" -e "CHEVERETO_DB_PREFIX=chv_" nmtan/chevereto//-v saves photos on the server instead of in the container//-e mysql:5.7.31 host,username,password,db_name(the db must exist first)//open chrome and input 127.0.0.1:8000 //this is docker-compose.ymlversion: '3'services: db: image: mariadb volumes: - ./database:/var/lib/mysql:rw restart: always networks: - private environment: MYSQL_ROOT_PASSWORD: xxxxx MYSQL_DATABASE: xxxxx MYSQL_USER: xxxxx MYSQL_PASSWORD: xxxxx chevereto: depends_on: - db image: nmtan/chevereto restart: always networks: - private environment: CHEVERETO_DB_HOST: db CHEVERETO_DB_USERNAME: xxxxxx CHEVERETO_DB_PASSWORD: xxxxx CHEVERETO_DB_NAME: xxxxx CHEVERETO_DB_PREFIX: chv_ volumes: - ./images:/var/www/html/images:rw - ./php.ini:/usr/local/etc/php/php.ini:ro ports: - 8080:80networks: private:// start commandnohup docker-compose up &> run.log &disown You may run into a stone wall when you first visit 127.0.0.1:8000. First, use docker exec -it chevereto bash to get into the container. If /var/www/html has no permission to write photos to /home/xxx/images, you can use chmod -R 777 /home/xxx/images. If updating chevereto from 1.1.4 to 1.2.2 fails ("no update possible: /app/install/update/temp/ path", that is, there is no temp folder in /app/install/update/ under version 1.2.0), you can mkdir temp and then 
chmod -R 777 ./temp and then refresh the webpage; the pics bed will update successfully. Now you can use the ip address to visit your chevereto. However, we usually visit the web with a domain name such as example.com, so https is necessary as well: 1.Use aliyun to apply for a free ssl certificate for a domain name such as pics.example.com 2.Download the pem and key to your server and put them in the nginx conf folder 3.Use the conf as follows server { listen 80; server_name pics.example.com; return 301 https://pics.example.com$request_uri; } server { listen 443 ssl; server_name pics.example.com; gzip on; ssl_certificate cert/xxxxxx9_pics.example.com.pem; # pem's filename ssl_certificate_key cert/xxxxxx9_pics.example.com.key;# key's filename location / { proxy_redirect off; proxy_pass http://dockername; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Frame-Options SAMEORIGIN; client_max_body_size 100m; client_body_buffer_size 128k; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } } And then you can visit https://pics.example.com That’s my story of building a pics bed, and I hope it helps you. 
2020-09-28 appendix: use PicGo to upload pictures from Typora to chevereto. Download PicGo from GitHub (on mac use the .dmg) and install it. Open 插件设置 (plugin settings), search chevereto and install chevereto 1.0.0. Open 图床设置 (uploader settings) > Chevereto Uploader and fill in the params: Url is your upload service ip/domain; Key is the chevereto api key in Dashboard>Settings>API>API v1 key; the param field is not in use now Url: https://example.com/api/1/upload Key: xxx Click 确定 (OK) and 设为默认图库 (set as default album). Make sure the server is on in PicGo设置 > 设置Server > 点击设置; if it is on, nothing needs to be done. Then we modify the config in Typora: open Typora and open Preferences > Images. Choose Upload images under When Insert, and check apply above rules to local images and apply above rules to online images in the options; I suggest you check both of them so that all pics are managed by chevereto. Choose PicGo.app in Image Uploader and click Test Uploader to test uploading pictures automatically. For more information, you can visit PicGo and PicGo-Core. Upload your pictures into a personal album instead of the visitors’ album. Chevereto API thanks Chevereto ShareX ioiox’s blog dana5haw’s blog","raw":null,"content":null,"categories":[{"name":"Chevereto","slug":"Chevereto","permalink":"http://zehai.info/categories/Chevereto/"}],"tags":[{"name":"pictureBed","slug":"pictureBed","permalink":"http://zehai.info/tags/pictureBed/"}]},{"title":"RabbitMQ","slug":"2020-09-14-RabbitMQ","date":"2020-09-14T04:13:40.000Z","updated":"2021-07-27T07:09:41.897Z","comments":true,"path":"2020/09/14/2020-09-14-RabbitMQ/","link":"","permalink":"http://zehai.info/2020/09/14/2020-09-14-RabbitMQ/","excerpt":"","text":"what MQ - message queue. The big three: rocketmq - made with Java, somewhat higher throughput, Alibaba middleware; rabbitmq - made with Erlang; Kafka - I'll fill in the performance/feature comparison when I understand it better. why functions: decoupling (both sides interact through the mq), async, peak shaving. applications: Alibaba's Double 11. problems: handle the added complexity; keep the system available. how I chose rabbitmq because rocketmq's nameserver needs too much memory (to say nothing of the broker); on a budget 1C2G machine it simply can't run. 1.docker run Since rocketmq needs more than that: docker pull rabbitmq:managementdocker run -dit --name rabbitmq -e RABBITMQ_DEFAULT_USER=admin -e 
RABBITMQ_DEFAULT_PASS=admin -p 15672:15672 -p 5672:5672 rabbitmq:management--name containername-e RABBITMQ_DEFAULT_USER 参数用户名,密码同理-p 端口映射,主机:容器,15672-UI,5672-servicerabbitmq:management image's name 2.Usage1.open chrome and input ‘localhost:15672’ or ‘192.168.1.1:15672’ then you can touch rabbitmq UI Overview–the queued msg, msg rate in your server, some global counts, your nodes stats (if u use the above method,you only see one node in the screen ),you also can build a cluster with more nodes Connections– Channels– Exchanges–direct,fanout,headers,match,trace,topic Queses– Admin–users management with passport && permission 2.use 5672 in your code 12345678amqp.connect({ protocol: 'amqp', hostname: 'example.com',//localhost port: '5672', username: 'admin', password: 'xxx', vhost: '/',//important }) more in official docs–> I’m doc or some blogs–>I’m blog or my GitHub–>click here","raw":null,"content":null,"categories":[{"name":"MQ","slug":"MQ","permalink":"http://zehai.info/categories/MQ/"}],"tags":[{"name":"basic","slug":"basic","permalink":"http://zehai.info/tags/basic/"}]},{"title":"EventLoop Source","slug":"2020-07-23-EventLoop2","date":"2020-07-24T03:49:24.000Z","updated":"2021-07-27T07:09:41.897Z","comments":true,"path":"2020/07/24/2020-07-23-EventLoop2/","link":"","permalink":"http://zehai.info/2020/07/24/2020-07-23-EventLoop2/","excerpt":"","text":"eventLoop之前也有过章节node整理Node.js 有看到石墨技术文档 cnode技术文档,作者:youth7 记录以下知识点: nodejs的event是基于libuv,浏览器的event loop则在html5的规范中明确定义,两个事物有明显的区别 process.nextTick()在6个阶段结束的时候都会执行 eventLoop timers 执行setTimeout() 和 setInterval()中到期的callback I/O callbacks 上一轮循环中有少数的I/Ocallback会被延迟到这一轮的这一阶段执行 idle, prepare 仅内部使用 poll 最为重要的阶段,执行I/O callback,在适当的条件下会阻塞在这个阶段 check 执行setImmediate的callback close callbacks 执行close事件的callback,例如socket.on("close",func) 123456789101112131415161718 ┌───────────────────────┐┌─>│ timers ││ └──────────┬────────────┘│ ┌──────────┴────────────┐│ │ I/O callbacks ││ └──────────┬────────────┘│ ┌──────────┴────────────┐│ 
│ idle, prepare ││ └──────────┬────────────┘ ┌───────────────┐│ ┌──────────┴────────────┐ │ incoming: ││ │ poll │<─────┤ connections, ││ └──────────┬────────────┘ │ data, etc. ││ ┌──────────┴────────────┐ └───────────────┘│ │ check ││ └──────────┬────────────┘│ ┌──────────┴────────────┐└──┤ close callbacks │ └───────────────────────┘ 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657# /deps/uv/src/unix/core.cint uv_run(uv_loop_t* loop, uv_run_mode mode) { int timeout; int r; int ran_pending; r = uv__loop_alive(loop); // if(uv_has_active_hanles||uv_has_active_reqs || lop->closing_handles!=NULL)retrun true if (!r) uv__update_time(loop); while (r != 0 && loop->stop_flag == 0) { uv__update_time(loop); // main uv__run_timers(loop);//timer phase ran_pending = uv__run_pending(loop);//IO callback pharse uv__run_idle(loop);//idle phase uv__run_prepare(loop);// prepare phase // main end timeout = 0; if ((mode == UV_RUN_ONCE && !ran_pending) || mode == UV_RUN_DEFAULT) timeout = uv_backend_timeout(loop); uv__io_poll(loop, timeout);//poll phase uv__run_check(loop);//check phase uv__run_closing_handles(loop);//closing pharse if (mode == UV_RUN_ONCE) { /* UV_RUN_ONCE implies forward progress: at least one callback must have * been invoked when it returns. uv__io_poll() can return without doing * I/O (meaning: no callbacks) when its timeout expires - which means we * have pending timers that satisfy the forward progress constraint. * * UV_RUN_NOWAIT makes no guarantees about progress so it's omitted from * the check. */ // UV_RUN_ONCE 至少有一个回调执行,不然该循环就空转了,满足前进要求 // 这也是[文章](https://zehai.info/2020/04/10/2020-04-10-eventloop/)中写到: // poll为空,eventloop将检查timer是否有快到的,如果需要执行,eventloop将要进入timers阶段来顺序执行timer callback uv__update_time(loop); uv__run_timers(loop); } r = uv__loop_alive(loop); if (mode == UV_RUN_ONCE || mode == UV_RUN_NOWAIT) break; } /* The if statement lets gcc compile it to a conditional store. 
Avoids * dirtying a cache line. */ if (loop->stop_flag != 0) loop->stop_flag = 0; return r;} timers phase执行setTimeout() 和 setInterval()中到期的callback 123456789101112131415161718192021void uv__run_timers(uv_loop_t* loop) { struct heap_node* heap_node; uv_timer_t* handle; for (;;) { heap_node = heap_min(timer_heap(loop)); if (heap_node == NULL) break; // 取出堆中最快要被执行的timer // #define container_of(ptr, type, member) // ((type *) ((char *) (ptr) - offsetof(type, member))) // 没看懂 handle是怎么生成的 handle = container_of(heap_node, uv_timer_t, heap_node); if (handle->timeout > loop->time)//执行时间大于eventloop循环一次时间,退出phase下次再说 break; uv_timer_stop(handle);// remove handle uv_timer_again(handle);// 多次重复的timer再塞进去 handle->timer_cb(handle);// invoke callback }} I/O callbacks上一轮循环中有少数的I/Ocallback会被延迟到这一轮的这一阶段执行 123456789101112131415161718192021//deps/uv/src/unix/core.cstatic int uv__run_pending(uv_loop_t* loop) { QUEUE* q; QUEUE pq; uv__io_t* w; if (QUEUE_EMPTY(&loop->pending_queue))//isEmpty return 0; QUEUE_MOVE(&loop->pending_queue, &pq);//move while (!QUEUE_EMPTY(&pq)) { q = QUEUE_HEAD(&pq);//find QUEUE_REMOVE(q);//pop QUEUE_INIT(q); w = QUEUE_DATA(q, uv__io_t, pending_queue); w->cb(loop, w, POLLOUT);//unitl queue empty } return 1;} Idle and prepare phase/ loop / void uv__run_idle(uv_loop_t* loop); void uv__run_check(uv_loop_t* loop); void uv__run_prepare(uv_loop_t* loop); 12345678910111213void uv__run_##name(uv_loop_t* loop) { uv_##name##_t* h; QUEUE queue; QUEUE* q; QUEUE_MOVE(&loop->name##_handles, &queue);//QUEUE_MOVE while (!QUEUE_EMPTY(&queue)) {//util empty q = QUEUE_HEAD(&queue);//pop h = QUEUE_DATA(q, uv_##name##_t, queue);//element->handle QUEUE_REMOVE(q);//remove QUEUE_INSERT_TAIL(&loop->name##_handles, q);//insert tail h->name##_cb(h);//callback }} !!!poll phase!!!最为重要的阶段,执行I/O callback,在适当的条件下会阻塞在这个阶段 可见poll阶段的任务就是阻塞等待监听的事件来临,然后执行对应的callback,其中阻塞是带有超时时间的,以下几种情况都会使得超时时间为0 uv_run处于UV_RUN_NOWAIT模式下 uv_stop()被调用 没有活跃的handles和request 有活跃的idle handles 有等待关闭的handles 
如果上述都不符合,则超时时间为距离现在最近的timer;如果没有timer则poll阶段会一直阻塞下去 个人理解nodejs的服务,大部分时间会被阻塞在这个阶段,而不去执行closing 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174175176177178179180181182183184185186187188189190191192193194195// 不行了,看不懂了void uv__io_poll(uv_loop_t* loop, int timeout) { struct pollfd events[1024]; struct pollfd pqry; struct pollfd* pe; struct poll_ctl pc; QUEUE* q; uv__io_t* w; uint64_t base; uint64_t diff; int have_signals; int nevents; int count; int nfds; int i; int rc; int add_failed; if (loop->nfds == 0) { assert(QUEUE_EMPTY(&loop->watcher_queue)); return; } while (!QUEUE_EMPTY(&loop->watcher_queue)) {//until watcher queue empty q = QUEUE_HEAD(&loop->watcher_queue); QUEUE_REMOVE(q); QUEUE_INIT(q); w = QUEUE_DATA(q, uv__io_t, watcher_queue); assert(w->pevents != 0); assert(w->fd >= 0); assert(w->fd < (int) loop->nwatchers); pc.events = w->pevents; pc.fd = w->fd; add_failed = 0; if (w->events == 0) { pc.cmd = PS_ADD; if (pollset_ctl(loop->backend_fd, &pc, 1)) { if (errno != EINVAL) { assert(0 && "Failed to add file descriptor (pc.fd) to pollset"); abort(); } /* Check if the fd is already in the pollset */ pqry.fd = pc.fd; rc = pollset_query(loop->backend_fd, &pqry); switch (rc) { case -1: assert(0 && "Failed to query pollset for file descriptor"); abort(); case 0: assert(0 && "Pollset does not contain file descriptor"); abort(); } /* If we got here then the pollset already contained the file descriptor even though * we didn't think it should. This probably shouldn't happen, but we can continue. 
*/ add_failed = 1; } } if (w->events != 0 || add_failed) { /* Modify, potentially removing events -- need to delete then add. * Could maybe mod if we knew for sure no events are removed, but * content of w->events is handled above as not reliable (falls back) * so may require a pollset_query() which would have to be pretty cheap * compared to a PS_DELETE to be worth optimizing. Alternatively, could * lazily remove events, squelching them in the mean time. */ pc.cmd = PS_DELETE; if (pollset_ctl(loop->backend_fd, &pc, 1)) { assert(0 && "Failed to delete file descriptor (pc.fd) from pollset"); abort(); } pc.cmd = PS_ADD; if (pollset_ctl(loop->backend_fd, &pc, 1)) { assert(0 && "Failed to add file descriptor (pc.fd) to pollset"); abort(); } } w->events = w->pevents; } assert(timeout >= -1); base = loop->time; count = 48; /* Benchmarks suggest this gives the best throughput. */ for (;;) { nfds = pollset_poll(loop->backend_fd, events, ARRAY_SIZE(events), timeout); /* Update loop->time unconditionally. It's tempting to skip the update when * timeout == 0 (i.e. non-blocking poll) but there is no guarantee that the * operating system didn't reschedule our process while in the syscall. */ SAVE_ERRNO(uv__update_time(loop)); if (nfds == 0) { assert(timeout != -1); return; } if (nfds == -1) { if (errno != EINTR) { abort(); } if (timeout == -1) continue; if (timeout == 0) return; /* Interrupted by a signal. Update timeout and poll again. */ goto update_timeout; } have_signals = 0; nevents = 0; assert(loop->watchers != NULL); loop->watchers[loop->nwatchers] = (void*) events; loop->watchers[loop->nwatchers + 1] = (void*) (uintptr_t) nfds; for (i = 0; i < nfds; i++) { pe = events + i; pc.cmd = PS_DELETE; pc.fd = pe->fd; /* Skip invalidated events, see uv__platform_invalidate_fd */ if (pc.fd == -1) continue; assert(pc.fd >= 0); assert((unsigned) pc.fd < loop->nwatchers); w = loop->watchers[pc.fd]; if (w == NULL) { /* File descriptor that we've stopped watching, disarm it. 
* * Ignore all errors because we may be racing with another thread * when the file descriptor is closed. */ pollset_ctl(loop->backend_fd, &pc, 1); continue; } /* Run signal watchers last. This also affects child process watchers * because those are implemented in terms of signal watchers. */ if (w == &loop->signal_io_watcher) have_signals = 1; else w->cb(loop, w, pe->revents); nevents++; } if (have_signals != 0) loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN); loop->watchers[loop->nwatchers] = NULL; loop->watchers[loop->nwatchers + 1] = NULL; if (have_signals != 0) return; /* Event loop should cycle now so don't poll again. */ if (nevents != 0) { if (nfds == ARRAY_SIZE(events) && --count != 0) { /* Poll for more events but don't block this time. */ timeout = 0; continue; } return; } if (timeout == 0) return; if (timeout == -1) continue;update_timeout: assert(timeout > 0); diff = loop->time - base; if (diff >= (uint64_t) timeout) return; timeout -= diff; }} check phase见idle prepare close关闭handle 12345678910111213static void uv__run_closing_handles(uv_loop_t* loop) { uv_handle_t* p; uv_handle_t* q; p = loop->closing_handles; loop->closing_handles = NULL; while (p) { q = p->next_closing; uv__finish_close(p); p = q; }} where is process.nextTick12345678910111213141516171819202122232425262728293031323334353637383940//lib/internal/process/task_queues.js// `nextTick()` will not enqueue any callback when the process is about to// exit since the callback would not have a chance to be executed.// 意思就是nextTick在进程快要结束时不会排队callback,因为没有机会执行// 你们看引用的文档吧,我看不下去了😭// 主要的思路是JS执行process.nexTick(),然后将callback交给c++执行function nextTick(callback) { if (typeof callback !== 'function') throw new ERR_INVALID_CALLBACK(callback); if (process._exiting) return; let args; switch (arguments.length) { case 1: break; case 2: args = [arguments[1]]; break; case 3: args = [arguments[1], arguments[2]]; break; case 4: args = [arguments[1], arguments[2], arguments[3]]; break; default: 
args = new Array(arguments.length - 1); for (let i = 1; i < arguments.length; i++) args[i - 1] = arguments[i]; } if (queue.isEmpty()) setHasTickScheduled(true); const asyncId = newAsyncId(); const triggerAsyncId = getDefaultTriggerAsyncId(); const tickObject = { [async_id_symbol]: asyncId, [trigger_async_id_symbol]: triggerAsyncId, callback, args }; if (initHooksExist()) emitInit(asyncId, 'TickObject', triggerAsyncId, tickObject); queue.push(tickObject);//封装callback push //进入c} question1.setTimeout vs setImmediate phase执行顺序 expire设置0是不是立刻执行 1234567setTimeout(() => { console.log('setTimeout')}, 0)setImmediate(() => { console.log('setImmediate')}) setTimeout/setInterval 的第二个参数取值范围是:[1, 2^31 - 1],如果超过这个范围则会初始化为 1,即 setTimeout(fn, 0) === setTimeout(fn, 1)。 setTimeout 的回调函数在 timer 阶段执行,setImmediate 的回调函数在 check 阶段执行,event loop 的开始会先检查 timer 阶段,但是在开始之前到 timer 阶段会消耗一定时间,所以就会出现两种情况: timer 前的准备时间超过 1ms,满足 loop->time >= 1,则执行 timer 阶段(setTimeout)的回调函数 timer 前的准备时间小于 1ms,则先执行 check 阶段(setImmediate)的回调函数,下一次 event loop 执行 timer 阶段(setTimeout)的回调函数 在举例: 12345678910setTimeout(() => { console.log('setTimeout')}, 0)setImmediate(() => { console.log('setImmediate')})const start = Date.now()while (Date.now() - start < 10);//准备时间超过1ms,则直接执行timer 2.setTimeout vs setImmediate 212345678910111213const fs = require('fs')fs.readFile(__filename, () => { setTimeout(() => { console.log('setTimeout') }, 0) setImmediate(() => { console.log('setImmediate') })})//setImmediate//setTimeout 在引用一下官方对于check phase的介绍 This phase allows a person to execute callbacks immediately after the poll phase has completed. If the poll phase becomes idle and scripts have been queued with setImmediate(), the event loop may continue to the check phase rather than waiting. setImmediate() is actually a special timer that runs in a separate phase of the event loop. It uses a libuv API that schedules callbacks to execute after the poll phase has completed. 
Generally, as the code is executed, the event loop will eventually hit the poll phase where it will wait for an incoming connection, request, etc. However, if a callback has been scheduled with setImmediate() and the poll phase becomes idle, it will end and continue to the check phase rather than waiting for poll events. fs.readFile 的回调函数执行完后: 注册 setTimeout 的回调函数到 timer 阶段 注册 setImmediate 的回调函数到 check 阶段 event loop 从 pool 阶段出来继续往下一个阶段执行,恰好是 check 阶段,所以 setImmediate 的回调函数先执行 本次 event loop 结束后,进入下一次 event loop,执行 setTimeout 的回调函数 所以,在 I/O Callbacks 中注册的 setTimeout 和 setImmediate,永远都是 setImmediate 先执行。 3.process.nextTick()123456789101112setInterval(() => { console.log('setInterval')}, 100)process.nextTick(function tick () { process.nextTick(tick)})//notesetImmediate(function immediate () { console.log('111');//会直接打印出很多次111 setImmediate(immediate)}) 运行结果:setInterval 永远不会打印出来。 //这个在node官方文档也有相关的描述 //我在这里也进行了笔记记录 //允许用户处理errors,清理不需要的资源,事件循环前 尝试重新连接 //有时有必要在eventloop继续之前,在call stack unwound之后,让callback执行 解释:process.nextTick 会无限循环,将 event loop 阻塞在 microtask 阶段,导致 event loop 上其他 macrotask 阶段的回调函数没有机会执行。//这段解释是前端的,后端是没有microtask的实际队列的 解决方法通常是用 setImmediate 替代 process.nextTick,如下: 1234567setInterval(() => { console.log('setInterval')}, 100)setImmediate(function immediate () { setImmediate(immediate)}) 运行结果:每 100ms 打印一次 setInterval。 解释:process.nextTick 内执行 process.nextTick 仍然将 tick 函数注册到当前 microtask 的尾部,所以导致 microtask 永远执行不完; setImmediate 内执行 setImmediate 会将 immediate 函数注册到下一次 event loop 的 check 阶段,而不是当前正在执行的 check 阶段,所以给了 event loop 上其他 macrotask 执行的机会。 再看个例子: 12345678910111213setImmediate(() => { console.log('setImmediate1') setImmediate(() => { console.log('setImmediate2') }) process.nextTick(() => { console.log('nextTick') })})setImmediate(() => { console.log('setImmediate3')}) 运行结果: 1234setImmediate1setImmediate3nextTicksetImmediate2 注意:并不是说 setImmediate 可以完全替代 process.nextTick,process.nextTick 在特定场景下还是无法被替代的,比如我们就想将一些操作放到最近的 microtask 里执行。 4.promise12345const promise = 
Promise.resolve() .then(() => { return promise })promise.catch(console.error) 运行结果: 123456TypeError: Chaining cycle detected for promise #<Promise> at <anonymous> at process._tickCallback (internal/process/next_tick.js:188:7) at Function.Module.runMain (module.js:667:11) at startup (bootstrap_node.js:187:16) at bootstrap_node.js:607:3 解释:promise.then 类似于 process.nextTick,都会将回调函数注册到 microtask 阶段。上面代码会导致死循环,类似前面提到的: 123process.nextTick(function tick () { process.nextTick(tick)}) 再看个例子: 123456789const promise = Promise.resolve()promise.then(() => { console.log('promise')})process.nextTick(() => { console.log('nextTick')}) 运行结果: 12nextTickpromise 解释:promise.then 虽然和 process.nextTick 一样,都将回调函数注册到 microtask,但优先级不一样。process.nextTick 的 microtask queue 总是优先于 promise 的 microtask queue 执行。 5.promise执行顺序1234567891011121314setTimeout(() => { console.log(1)}, 0)new Promise((resolve, reject) => { console.log(2) for (let i = 0; i < 10000; i++) { i === 9999 && resolve() } console.log(3)}).then(() => { console.log(4)})console.log(5) 运行结果: 1234523541 解释:Promise 构造函数是同步执行的,所以先打印 2、3,然后打印 5,接下来 event loop 进入执行 microtask 阶段,执行 promise.then 的回调函数打印出 4,然后执行下一个 macrotask,恰好是 timer 阶段的 setTimeout 的回调函数,打印出 1。 6.综合12345678910111213141516171819202122232425setImmediate(() => { console.log(1) setTimeout(() => { console.log(2) }, 100) setImmediate(() => { console.log(3) }) process.nextTick(() => { console.log(4) })})process.nextTick(() => { console.log(5) setTimeout(() => { console.log(6) }, 100) setImmediate(() => { console.log(7) }) process.nextTick(() => { console.log(8) })})console.log(9) 运行结果: 123456789958174362 process.nextTick、setTimeout 和 setImmediate 的组合,请读者自己推理吧。 other source codesetTimeout()12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849//lib/timers/promises.js//setTimeout(function(){},expire)function setTimeout(after, value, options = {}) { const args = value !== undefined ? 
[value] : value; if (options == null || typeof options !== 'object') { return PromiseReject( new ERR_INVALID_ARG_TYPE( 'options', 'Object', options)); } const { signal, ref = true } = options; if (signal !== undefined && (signal === null || typeof signal !== 'object' || !('aborted' in signal))) { return PromiseReject( new ERR_INVALID_ARG_TYPE( 'options.signal', 'AbortSignal', signal)); } if (typeof ref !== 'boolean') { return PromiseReject( new ERR_INVALID_ARG_TYPE( 'options.ref', 'boolean', ref)); } // TODO(@jasnell): If a decision is made that this cannot be backported // to 12.x, then this can be converted to use optional chaining to // simplify the check. if (signal && signal.aborted) return PromiseReject(lazyDOMException('AbortError')); return new Promise((resolve, reject) => { const timeout = new Timeout(resolve, after, args, false, true); if (!ref) timeout.unref(); insert(timeout, timeout._idleTimeout); if (signal) { signal.addEventListener('abort', () => { if (!timeout._destroyed) { // eslint-disable-next-line no-undef clearTimeout(timeout); reject(lazyDOMException('AbortError')); } }, { once: true }); } });}","raw":null,"content":null,"categories":[{"name":"Node","slug":"Node","permalink":"http://zehai.info/categories/Node/"}],"tags":[{"name":"source","slug":"source","permalink":"http://zehai.info/tags/source/"}]},{"title":"算法图解","slug":"2020-06-08-算法图解","date":"2020-06-08T03:34:04.000Z","updated":"2021-07-27T07:09:41.897Z","comments":true,"path":"2020/06/08/2020-06-08-算法图解/","link":"","permalink":"http://zehai.info/2020/06/08/2020-06-08-%E7%AE%97%E6%B3%95%E5%9B%BE%E8%A7%A3/","excerpt":"","text":"1. 算法简介1.1 二分Why: 复杂度O(n)—>O(logn) 使用限制:有序数组 1.2 大O表示指出算法运行时间的增速,算法需要做的就是把O(n^2)优化到O(n)等 2. 选择排序2.1 数组 链表数组:连续物理空间,可随机访问,增删数据复杂度高 链表:分散无力空间,不可随机访问(只能顺序),增删数据复杂度低 数组 链表 读改 O(1) O(n) 增删 O(n) O(1) 根据互相特性,选择合适的方式,如频繁增删用链表,反之用数组 2.2 选择排序复杂度:O(n^2) 遍历n 个元素选择 最小/大的,遍历n-1个元素选择 最小/大的 3. 递归类比:套娃 :call_me_hand: 性能和易读不可兼得 避免死循环! 尾递归可以解决部分性能问题 递归调用栈是性能降低的原因,遵循FIFO 4. 
快排核心:分而治之divide and conquer,快排只是其中的一个应用 思想:递归的一种应用 快排(递归)是一种函数式编程 快排通过基准值(可以选第一个元素)进行分而治之 5. 散列表实现方式:数组,非链表,检索值key类似数组的下表,可直接访问value 应用:DNS,阻止重复数据(类set集),作缓存(服务器端) 复杂度 散列平均 散列最糟 数组 链表 查找 1 n 1 n 插入删除 1 n n 1 装填因子(0.4)=散列元素(4)/位置总数(10) 避免冲突:1.良好的散列函数(均匀分布) 2.较低的装填因子(<0.7) 将满时候:1.申请两倍于原来的 新空间 2.hash所有元素到新空间 冲突解决: 开放地址(最简单就是冲突顺延下一位,直到为空) 拉链发(指在某个位子上再拉一条链表,非👖拉链) 6.BFS广度优先搜索breadth first search,解决无加权最短路径问题之一 应用:国际跳棋,拼写检查,人际关系网络 7. Dijkstra正加权有向无环图的解决算法 最短时间内到达的节点 更新该节点临接节点的开销 重复 计算最终路径 解决环: 负加权:bellman ford algorithm 8. Greedy每步最优–>全局最优,得到近似正确的结果 9.DP列出所有可能 10. K最邻近算法11.next树解决了二分查找中,插入删除O(n)降低到O(log n),但是降低了随机访问能力 树包括:二叉树,平衡二叉树,B树 B+树,,红黑树 反向索引:散列表,用于创建搜索引擎—>应用:傅里叶变换 并行算法,单机并行or分布式,应用:mapreduce,map->映射 ,reduce->归并 布隆过滤器:庞大的散列表(如谷歌的上亿条),通常使用redis实现,是一种概率型数据结构(偶尔出错),使用理由,存储空间少 hyperLogLog:类似布隆,是个日志 SHA算法 散列的一种应用 判断两个(超大)文件是否相同(散列值相同) SHA(用户输入密码)?== 数据库存储的SHA值,且拖库后无法还原密码 SHA是一系列算法的统称,包括SHA-0 ,SHA-1 SHA-2 SHA-3 bcrypt etc SHA全局敏感(改动局部,整体全变),SIMhash局部敏感(局部改变,散列值局部改变),后者用于判断网页是否已经搜集,作业是否抄袭,相似度查询 diffie-hellman密钥交换 双方无需知道加密算法,破解难度大 公钥与私钥,client获取公钥后,1.使用公钥加密 2.服务器端使用私钥解密 线性规划:simplex算法","raw":null,"content":null,"categories":[{"name":"Books","slug":"Books","permalink":"http://zehai.info/categories/Books/"}],"tags":[{"name":"算法图解","slug":"算法图解","permalink":"http://zehai.info/tags/%E7%AE%97%E6%B3%95%E5%9B%BE%E8%A7%A3/"}]},{"title":"2020-05-25-FirstUniqueCharacterInAString","slug":"2020-05-25-FirstUniqueCharacterInAString","date":"2020-05-30T13:15:37.000Z","updated":"2021-07-27T07:09:41.896Z","comments":true,"path":"2020/05/30/2020-05-25-FirstUniqueCharacterInAString/","link":"","permalink":"http://zehai.info/2020/05/30/2020-05-25-FirstUniqueCharacterInAString/","excerpt":"","text":"Leetcode-6412345678910Given a string, find the first non-repeating character in it and return it's index. If it doesn't exist, return -1.Examples:s = "leetcode"return 0.s = "loveleetcode",return 2.Note: You may assume the string contain only lowercase letters. 
solution12345678910111213141516/** * @param {string} s * @return {number} */var firstUniqChar = function(s) { for(var i=0;i<s.length;i++){ var flag = false; for(var j=0;j<s.length;j++){ if(i==j)continue; if(s[i]==s[j])flag=true; } if(!flag)return i; } return -1;}; 123456789101112131415161718class Solution { public int firstUniqChar(String s) { HashMap<Character, Integer> count = new HashMap<Character, Integer>(); int n = s.length(); // build hash map : character and how often it appears for (int i = 0; i < n; i++) { char c = s.charAt(i); count.put(c, count.getOrDefault(c, 0) + 1); } // find the index for (int i = 0; i < n; i++) { if (count.get(s.charAt(i)) == 1) return i; } return -1; }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"monit","slug":"2020-05-13-监控软件","date":"2020-05-13T08:18:55.000Z","updated":"2021-07-27T07:09:41.896Z","comments":true,"path":"2020/05/13/2020-05-13-监控软件/","link":"","permalink":"http://zehai.info/2020/05/13/2020-05-13-%E7%9B%91%E6%8E%A7%E8%BD%AF%E4%BB%B6/","excerpt":"","text":"监控 后端prometheus vs zabbix 先对两者的各自特点进行一下对比: Zabbix Prometheus 后端用 C 开发,界面用 PHP 开发,定制化难度很高。 后端用 golang 开发,前端是 Grafana,JSON 编辑即可解决。定制化难度较低。 集群规模上限为 10000 个节点。 支持更大的集群规模,速度也更快。 更适合监控物理机环境。 更适合云环境的监控,对 OpenStack,Kubernetes 有更好的集成。 监控数据存储在关系型数据库内,如 MySQL,很难从现有数据中扩展维度。 监控数据存储在基于时间序列(TSDB)的数据库内,便于对已有数据进行新的聚合。 安装简单,zabbix-server 一个软件包中包括了所有的服务端功能。 安装相对复杂,监控、告警和界面都分属于不同的组件。 图形化界面比较成熟,界面上基本上能完成全部的配置操作。 界面相对较弱,很多配置需要修改配置文件。 发展时间更长,对于很多监控场景,都有现成的解决方案。 2015 年后开始快速发展,但发展时间较短,成熟度不及 Zabbix。 由于最后敲定了Prometheus方案,对于zabbix就云评测了,欢迎指正 虽然图形化界面弱化,很多配置走yml文件,但图形化界面真的没有必要 时序数据库,高并发下好于mysql(不然干嘛开发tsdb应对监控场景) prom支持pull和push模型,可以支持k8s,swarm等服务发现 前端Performance?webVitals?以后用到再补充 主要关注性能,pv,redirect,err等问题 页面是否可用阿里云-云监控控制台 可提供网址监控,包括cookie, headers 
等自定义的简单配置,进行电话,邮件,短信,旺旺等报警","raw":null,"content":null,"categories":[{"name":"monit","slug":"monit","permalink":"http://zehai.info/categories/monit/"}],"tags":[{"name":"intro","slug":"intro","permalink":"http://zehai.info/tags/intro/"}]},{"title":"Node.JSv14","slug":"2020-04-22-Node14","date":"2020-04-22T02:37:29.000Z","updated":"2021-07-27T07:09:41.896Z","comments":true,"path":"2020/04/22/2020-04-22-Node14/","link":"","permalink":"http://zehai.info/2020/04/22/2020-04-22-Node14/","excerpt":"","text":"newmod今天看到Node Current更新了14的版本,看看都有些什么东西 前置了解了一下doc中提到的semver,是一个语义化版本semantic versioning,实现版本和版本规范的解析,计算,比较,用以解决在大型项目中对依赖的版本失去控制的问题,Node.js 的包管理工具 npm 也完全基于 Semantic Versioning 来管理依赖的版本。 参考资料:semver:语义化版本规范在 Node.js 中的实现 deprecationssermver弃用一部分功能 (SEMVER-MAJOR) crypto: move pbkdf2 without digest to EOL (James M Snell) (SEMVER-MAJOR) fs: deprecate closing FileHandle on garbage collection (James M Snell) (SEMVER-MAJOR) http: move OutboundMessage.prototype.flush to EOL (James M Snell) (SEMVER-MAJOR) lib: move GLOBAL and root aliases to EOL (James M Snell) (SEMVER-MAJOR) os: move tmpDir() to EOL (James M Snell) (SEMVER-MAJOR) src: remove deprecated wasm type check (Clemens Backes) (SEMVER-MAJOR) stream: move _writableState.buffer to EOL (James M Snell) (SEMVER-MINOR) doc: deprecate process.mainModule (Antoine du HAMEL) (SEMVER-MINOR) doc: deprecate process.umask() with no arguments (Colin Ihrig) ECMAScript Modules在 v13 中,需要调用 --experimental-modules 来开启 ESM module 支持, 而且还会有警告,但目前已经移除警告(还是需要手动开启)仍在实验中,但是其已经非常完善,移除警告迈向了stable的重要一步 New V8 ArrayBuffer APIv8不再支持多个ArrayBuffer指向相同的base address Toolchain and Compiler Upgrades//没看懂 (SEMVER-MAJOR) build: update macos deployment target to 10.13 for 14.x (AshCripps) #32454 (SEMVER-MAJOR) doc: update cross compiler machine for Linux armv7 (Richard Lau) #32812 (SEMVER-MAJOR) doc: update Centos/RHEL releases use devtoolset-8 (Richard Lau) #32812 (SEMVER-MAJOR) doc: remove SmartOS from official binaries (Richard Lau) #32812 (SEMVER-MAJOR) 
win: block running on EOL Windows versions (João Reis) #31954 It is expected that there will be an ABI mismatch on ARM between the Node.js binary and native addons. Native addons are only broken if they interact with std::shared_ptr. This is expected to be fixed in a later version of Node.js 14. Update to V8 8.1Others cli, report: move –report-on-fatalerror to stable (Colin Ihrig) deps: upgrade to libuv 1.37.0 (Colin Ihrig) fs: add fs/promises alias module","raw":null,"content":null,"categories":[{"name":"Node","slug":"Node","permalink":"http://zehai.info/categories/Node/"}],"tags":[{"name":"14","slug":"14","permalink":"http://zehai.info/tags/14/"}]},{"title":"LeetCodeWeek2","slug":"2020-04-19-LeetCodeWeek1","date":"2020-04-19T05:23:52.000Z","updated":"2021-07-27T07:09:41.896Z","comments":true,"path":"2020/04/19/2020-04-19-LeetCodeWeek1/","link":"","permalink":"http://zehai.info/2020/04/19/2020-04-19-LeetCodeWeek1/","excerpt":"","text":"Problem Product of Array Except SelfGiven an array nums of n integers where n > 1, return an array output such that output[i] is equal to the product of all the elements of nums except nums[i]. Example: 12Input: [1,2,3,4]Output: [24,12,8,6] Constraint: It’s guaranteed that the product of the elements of any prefix or suffix of the array (including the whole array) fits in a 32 bit integer. Note: Please solve it without division and in O(n). Follow up:Could you solve it with constant space complexity? (The output array does not count as extra space for the purpose of space complexity analysis.) 
keysolution1234567891011121314151617181920212223242526272829//3msclass Solution { public int[] productExceptSelf(int[] nums) { int sum =1; int hasZero =0; for(int num :nums){ if(num!=0){ sum*=num; }else{ hasZero++; } } for(int i=0;i<nums.length;i++){ if(hasZero>=2){ nums[i]=0; }else if(hasZero==1){ if(nums[i]==0){ nums[i]=sum; }else{ nums[i]=0; } }else{ nums[i]=sum/nums[i]; } } return nums; }} 1234567891011121314151617//1msclass Solution { public int[] productExceptSelf(int[] nums) { int n = nums.length; int[] left = new int[n]; left[0] = 1; for (int i = 1; i < n; i++) { left[i] = left[i-1] * nums[i-1]; } int product = 1; for (int i = n - 1; i >= 0; i--) { left[i] *= product; product *= nums[i]; } return left; }} Problem-678Valid Parenthesis StringMedium Given a string containing only three types of characters: ‘(‘, ‘)’ and ‘*’, write a function to check whether this string is valid. We define the validity of a string by these rules: Any left parenthesis '(' must have a corresponding right parenthesis ')'. Any right parenthesis ')' must have a corresponding left parenthesis '('. Left parenthesis '(' must go before the corresponding right parenthesis ')'. '*' could be treated as a single right parenthesis ')' or a single left parenthesis '(' or an empty string. An empty string is also valid. Example 1: 12Input: "()"Output: True Example 2: 12Input: "(*)"Output: True Example 3: 12Input: "(*))"Output: True Note: The string size will be in the range [1, 100]. 
keysolution123456789101112131415161718192021222324class Solution { public boolean checkValidString(String s) { if (s.length() == 0) return true; int left = 0;int star=0; char[] c = s.toCharArray(); for (char i : c) { switch (i) { case '(': left++; break; case ')': left--; break; case '*': star++; break; default: break; } } if (left == 0 || left - star == 0 || left + star == 0) return true; return false; }} Brute Force12345678910111213141516171819202122232425262728293031323334 class Solution { boolean ans = false; public boolean checkValidString(String s) { solve(new StringBuilder(s), 0); return ans; } public void solve(StringBuilder sb, int i) { if (i == sb.length()) { ans |= valid(sb); } else if (sb.charAt(i) == '*') { for (char c: "() ".toCharArray()) { sb.setCharAt(i, c); solve(sb, i+1); if (ans) return; } sb.setCharAt(i, '*'); } else solve(sb, i + 1); } public boolean valid(StringBuilder sb) { int bal = 0; for (int i = 0; i < sb.length(); i++) { char c = sb.charAt(i); if (c == '(') bal++; if (c == ')') bal--; if (bal < 0) break; } return bal == 0; }} Dynamic Programming123456789101112131415161718192021222324252627282930313233class Solution { public boolean checkValidString(String s) { int n = s.length(); if (n == 0) return true; boolean[][] dp = new boolean[n][n]; for (int i = 0; i < n; i++) { if (s.charAt(i) == '*') dp[i][i] = true; if (i < n-1 && (s.charAt(i) == '(' || s.charAt(i) == '*') && (s.charAt(i+1) == ')' || s.charAt(i+1) == '*')) { dp[i][i+1] = true; } } for (int size = 2; size < n; size++) { for (int i = 0; i + size < n; i++) { if (s.charAt(i) == '*' && dp[i+1][i+size] == true) { dp[i][i+size] = true; } else if (s.charAt(i) == '(' || s.charAt(i) == '*') { for (int k = i+1; k <= i+size; k++) { if ((s.charAt(k) == ')' || s.charAt(k) == '*') && (k == i+1 || dp[i+1][k-1]) && (k == i+size || dp[k+1][i+size])) { dp[i][i+size] = true; } } } } } return dp[0][n-1]; }} Greedy123456789101112class Solution { public boolean checkValidString(String s) { int lo = 
0, hi = 0; for (char c: s.toCharArray()) { lo += c == '(' ? 1 : -1; hi += c != ')' ? 1 : -1; if (hi < 0) break; lo = Math.max(lo, 0); } return lo == 0; }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"同一宿主机下docker互相访问","slug":"2020-04-16-同一宿主机下docker互相访问","date":"2020-04-16T10:06:19.000Z","updated":"2021-07-27T07:09:41.895Z","comments":true,"path":"2020/04/16/2020-04-16-同一宿主机下docker互相访问/","link":"","permalink":"http://zehai.info/2020/04/16/2020-04-16-%E5%90%8C%E4%B8%80%E5%AE%BF%E4%B8%BB%E6%9C%BA%E4%B8%8Bdocker%E4%BA%92%E7%9B%B8%E8%AE%BF%E9%97%AE/","excerpt":"","text":"what该文档解决:docker下,altermanager收不到prometheus消息 事因,我在一个宿主机下建立了多个docker容器 node-exporter prometheus grafana alertmanager timonwong/prometheus-webhook-dingtalk 这些服务之间会有一些互相访问,如prometheus可以发送数据给alertmanager来发送报警信息,alertmanager通过规则处理可以发送邮件,发送钉钉等方式告知用户,问题就出在prometheus的yml配置文档中: 1234567891011alerting: alertmanagers: - static_configs: - targets: ['localhost:9002'] ###############修改后:alerting: alertmanagers: - static_configs: - targets: ['10.10.10.10:9002'] 问题出在了prometheus的配置中访问了localhost端口,但这个并不是访问宿主机的9002的端口,而是访问的是docker内部的9002端口 找到问题后,使用了宿主机ip+port的方式进行访问 how查询了资料后,发现解决该问题的方法有: 宿主ip:port访问 容器ip访问 link建立通信网络(单向,不推荐)–link xxx user-defined networks(docker dns server/bridge) 前两种不太推荐,因为如果容器ip更改或者宿主机ip更改就需要更新配置文档,第三种方法不太推荐,run 时候link只是单向的建立连接,第四种docker network create: 1234//创建网络docker network create -d bridge my-bridge-network//run时候加入网络docker run -it --network test-network --network-alias mysql -e MYSQL_ROOT_PASSWORD=123 
mysql:5.7","raw":null,"content":null,"categories":[{"name":"Question","slug":"Question","permalink":"http://zehai.info/categories/Question/"}],"tags":[{"name":"整理","slug":"整理","permalink":"http://zehai.info/tags/%E6%95%B4%E7%90%86/"}]},{"title":"node整理","slug":"2020-04-10-eventloop","date":"2020-04-10T08:44:26.000Z","updated":"2021-07-27T07:09:41.895Z","comments":true,"path":"2020/04/10/2020-04-10-eventloop/","link":"","permalink":"http://zehai.info/2020/04/10/2020-04-10-eventloop/","excerpt":"","text":"Whateventloop使得单线程机制的node实现非阻塞I/O的机制,将任务通过libuv分发给线程池后,交由系统内核完成(多线程),完成后内核通知nodejs,将回调放入poll队列执行 启动nodejs时,eventloop初始化,进程会输入很多script,包括: async API calls 定时器 process.nextTick() eventloop有六个队列 timers pending callbacks idle,prepare poll(connections,data,etc) check close callbacks 这些队列被称作phase,每个phase都是一个可以放callback的FIFO队列,当进入一个phase时,队列将执行完phase中的callback或者执行最大数目的callback后将进入另一个phase timers:执行定时器,包括setTimeout,setInerval pending callbacks 执行延迟到下一个循环的I/O callback idle,prepare 处理系统内部 poll:检查新的I/O事件,执行I/O回调,node会适当的在此阻塞 check:setImmediate() close:关闭回调函数,如:socket.on(‘close’,foo()) DetailTimers设定延迟后,timers会在规定的时间执行,但存在情况延迟,如poll phase执行回调,超过了timer设定的时间。因为poll必须完成一个任务后才可以检查最近的定时器,没到时间就执行下一个callback,执行callback期间无法中断 可以得出结论:poll控制着定时器何时执行 另外为了防止poll phase 变成恶汉,libuv 制定了一个依赖于系统的硬性最大值,来停止轮询获取更多事件 pending callbacks该队列在系统错误时执行回调(如TCP err),如TCP socket尝试重连收到了ECONNREFUSED,系统需要这些错误报告,那这个错误报告回调就会放在pending callbacks中等待被执行 poll最重要的阶段,poll主要包含两个功能: 计算阻塞和轮询的IO时间 执行poll 队列里的events 当eventloop进入poll阶段,并没有timers的时候 poll不为空,顺序同步执行任务,直到为空或达到处理数量上限 poll为空:如果有setImmediate(),则进入check phase,反之就在poll等客人 一但poll为空,eventlopp将会检查计时器是否有快到的,如果有需要执行的,eventloop将要进入timers阶段来顺序执行timer callback check这个phase可以在poll执行完成时开始执行setImmediate()回调。他其实是特殊的定时器队列,使用libuv API在poll完成的阶段执行(这也是他存在的原因)。 close callbackssocket.desroy()等执行关闭event时候会进入该phase,否则会被process.nextTick()触发 setImmedate() vs setTimeout()相似却又不同 setImmediate()是poll执行完成后执行的script setTimeout()是定时执行的 执行哪个收到上下文的约束,如果两个都被主模块调用,那么进程性能将会收到约束(影响其他app运行) 
1234567891011121314151617181920212223242526272829303132without IOsetTimeout(() => { console.log('timeout');}, 0);setImmediate(() => { console.log('immediate');});//$ node timeout_vs_immediate.jstimeoutimmediate$ node timeout_vs_immediate.jsimmediatetimeoutwith IO// timeout_vs_immediate.jsconst fs = require('fs');fs.readFile(__filename, () => { setTimeout(() => { console.log('timeout'); }, 0); setImmediate(() => { console.log('immediate'); });});//immediatetimeout setImmediate()好处在于,如果有IO时会比setTimeout先执行 process.nextTick()它是个异步API,并没有出现在六个phase中,他并不属于eventloop的一部分,当操作完成后处理nextTickQueue而不管eventloop执行到哪个阶段,这个异步API依赖于C/C++处理 JavaScript 他的callbakcs会立即执行,直到执行完,eventloop才会正常工作(如果nextTick递归调用则会死循环) 为什么会出现这种设计? 出于所有接口都应该异步的设计思路 12345function apiCall(arg, callback) { if (typeof arg !== 'string') return process.nextTick(callback, new TypeError('argument should be string'));} 代码段会校验参数,如果不正确,它将会把错误传递给回调。该API最近更新,允许传任何参给process.nextTick(),所以你不需要嵌套。仅在剩余代码执行之后我们会把错误反馈给用户,通过nextTick,我们保证apiCal()始终在用户胜于代码之后及eventloop继续之前,执行。为了达到这个目标,JS栈内存允许展开并且立即执行提供的callback,似的nextTick递归不会有报错。 process.nextTick() vs setImmediate() process.nextTick()立刻执行 setImmediate()下次tick执行 为什么需要process.nextTick() 允许用户处理errors,清理不需要的资源,事件循环前 尝试重新连接 有时有必要在eventloop继续之前,在call stack unwound之后,让callback执行 12345const server = net.createServer();server.on('connection', (conn) => { });server.listen(8080);server.on('listening', () => { }); listen()的callback调用的是setImmiate(),除非传递Hostname,否则立即绑定端口。为了保证eventloop继续,他必须进入poll 
phase,这意味着,存在可能已经收到了连接,从而允许在侦听事件之前触发连接事件","raw":null,"content":null,"categories":[{"name":"Node","slug":"Node","permalink":"http://zehai.info/categories/Node/"}],"tags":[{"name":"整理","slug":"整理","permalink":"http://zehai.info/tags/%E6%95%B4%E7%90%86/"}]},{"title":"LeetCodeWeek2","slug":"2020-04-08-LeetCodeWeek2","date":"2020-04-08T08:44:26.000Z","updated":"2021-07-27T07:09:41.895Z","comments":true,"path":"2020/04/08/2020-04-08-LeetCodeWeek2/","link":"","permalink":"http://zehai.info/2020/04/08/2020-04-08-LeetCodeWeek2/","excerpt":"","text":"Prolem876-Submission DetailGiven a non-empty, singly linked list with head node head, return a middle node of linked list. If there are two middle nodes, return the second middle node. Example 1: 12345Input: [1,2,3,4,5]Output: Node 3 from this list (Serialization: [3,4,5])The returned node has value 3. (The judge's serialization of this node is [3,4,5]).Note that we returned a ListNode object ans, such that:ans.val = 3, ans.next.val = 4, ans.next.next.val = 5, and ans.next.next.next = NULL. Example 2: 123Input: [1,2,3,4,5,6]Output: Node 4 from this list (Serialization: [4,5,6])Since the list has two middle nodes with values 3 and 4, we return the second one. Note: The number of nodes in the given list will be between 1 and 100. 
key题目输出单向链表的中间元素,有这么几个思路 O(N)–>遍历放数组,1/2输出return A[t / 2] O(N)–>根据中间特点,mid前进一格,end前进两格 Solution第一次提交:0ms 12345678910111213141516171819202122class Solution { public ListNode middleNode(ListNode head) { ListNode mid = head; ListNode end = head; int i=0; while(end.next!=null){ mid = head.next; ListNode tmp = mid; i++; int j=i; while(j>0){//搞复杂了 if(tmp.next==null)return mid; end = tmp.next; tmp=tmp.next; j--; } head=head.next; } return mid; }} 第二次参考其他代码-提交: 12345678910class Solution { public ListNode middleNode(ListNode head) { ListNode mid = head, end = head; while (mid != null && end.next != null) { mid = mid.next; end = end.next.next; } return mid; }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"LeetCodeWeek1","slug":"2020-04-05-LeetCodeWeek1","date":"2020-04-05T14:32:05.000Z","updated":"2021-07-27T07:09:41.894Z","comments":true,"path":"2020/04/05/2020-04-05-LeetCodeWeek1/","link":"","permalink":"http://zehai.info/2020/04/05/2020-04-05-LeetCodeWeek1/","excerpt":"","text":"Problem Single Number好久没有刷题了,刚好遇到LeetCode,30天计划,打算强迫自己完成 Given a non-empty array of integers, every element appears twice except for one. Find that single one. Note: Your algorithm should have a linear runtime complexity. Could you implement it without using extra memory? 
Example 1: 12Input: [2,2,1]Output: 1 Example 2: 12Input: [4,1,2,1,2]Output: 4 Key思路 第一个思路O(n^2)去做类似于冒泡遍历的办法 借助Array.sort()可以迅速排序,然后O(n)的办法遍历得到结果 (以上是自己的思路,以下为LeetCode代码思考) 通过异或操作迅速比较 通过 Arrays.stream(nums).reduce(0, (x, y) -> x ^ y)来更快迭代每个元素 Array.steam()以下参考CSDN Stream 不是集合元素,它不是数据结构并不保存数据,它是有关算法和计算的,它更像一个高级版本的 Iterator。原始版本的 Iterator,用户只能显式地一个一个遍历元素并对其执行某些操作;高级版本的 Stream,用户只要给出需要对其包含的元素执行什么操作,比如 “过滤掉长度大于 10 的字符串”、“获取每个字符串的首字母”等,Stream 会隐式地在内部进行遍历,做出相应的数据转换。 Stream 就如同一个迭代器(Iterator),单向,不可往复,数据只能遍历一次,遍历过一次后即用尽了,就好比流水从面前流过,一去不复返。 而和迭代器又不同的是,Stream 可以并行化操作,迭代器只能命令式地、串行化操作。顾名思义,当使用串行方式去遍历时,每个 item 读完后再读下一个 item。而使用并行去遍历时,数据会被分成多个段,其中每一个都在不同的线程中处理,然后将结果一起输出。Stream 的并行操作依赖于 Java7 中引入的 Fork/Join 框架(JSR166y)来拆分任务和加速处理过程 简单说,对 Stream 的使用就是实现一个 filter-map-reduce 过程,产生一个最终结果,或者导致一个副作用(side effect)。 (以下为个人理解) 相对于Java中的Stream流,Java中也有,比如Array.reduce(),Array.foreach()等,通过回调函数的方式进行, 异或|=:两个二进制对应位都为0时,结果等于0,否则结果等于1; &=:两个二进制的对应位都为1时,结果为1,否则结果等于0; ^=:两个二进制的对应位相同,结果为0,否则结果为1。 对于这道题来说,[2,2,1] 第零次遍历:init res=0,题目要求找出出现一次的数,所以这个数肯定存在 第一次遍历:res=2 第二次遍历:res=0,因为res^=2(即res=res^2) 第三次遍历:res=1结束遍历 综上:常用^= 以及>>位运算符,C级别的性能 Solution 对于异或方法(0ms) 12345678910class Solution { public int singleNumber(int[] nums) { int result = 0; for (int n : nums) { result ^= n; } return result; }} 自己的方法就不贴了。。==感觉好蠢==写了半天。 Problem Move ZeroesGiven an array nums, write a function to move all 0‘s to the end of it while maintaining the relative order of the non-zero elements. Example: 12Input: [0,1,0,3,12]Output: [1,3,12,0,0] Note: You must do this in-place without making a copy of the array. Minimize the total number of operations. 
Solution第一版: 123456789101112class Solution { public void moveZeroes(int[] nums) { for(int i=0;i<nums.length;i++){ if(nums[i]==0){ for(int j=i;j<nums.length-1;j++){ nums[j]=nums[j+1]; } nums[nums.length-1]=0; } } }} 原本根据题目的意思,想法就是找到一个0,整体往前移动一位,一把梭,但写完发现,本身没有必要整体前移,因为我的判断是num[i]是不是为0,所以只需要将0的个数记录下来,非0的元素前移,最后补0就可以了 第二版 1234567891011121314class Solution { public void moveZeroes(int[] nums) { if (nums == null || nums.length == 0) return; int insertPos = 0; for (int num: nums) { if (num != 0) nums[insertPos++] = num; } while (insertPos < nums.length) { nums[insertPos++] = 0; } }} Problem Best Time to Buy and Sell Stock IISay you have an array for which the ith element is the price of a given stock on day i. Design an algorithm to find the maximum profit. You may complete as many transactions as you like (i.e., buy one and sell one share of the stock multiple times). Note: You may not engage in multiple transactions at the same time (i.e., you must sell the stock before you buy again). Example 1: Input: [7,1,5,3,6,4]Output: 7Explanation: Buy on day 2 (price = 1) and sell on day 3 (price = 5), profit = 5-1 = 4. Then buy on day 4 (price = 3) and sell on day 5 (price = 6), profit = 6-3 = 3.Example 2: Input: [1,2,3,4,5]Output: 4Explanation: Buy on day 1 (price = 1) and sell on day 5 (price = 5), profit = 5-1 = 4. Note that you cannot buy on day 1, buy on day 2 and sell them later, as you are engaging multiple transactions at the same time. You must sell before buying again.Example 3: Input: [7,6,4,3,1]Output: 0Explanation: In this case, no transaction is done, i.e. max profit = 0. key题目获取最大利润,本以为是通过动态规划DP来做,但是仔细一想,差值就能解决问题1234567891011class Solution { public int maxProfit(int[] prices) { int res = 0; for (int i = 0; i < prices.length - 1; ++i) { if (prices[i] < prices[i + 1]) { res += prices[i + 1] - prices[i]; } } return res; }} Problem happy NumberWrite an algorithm to determine if a number is “happy”. 
A happy number is a number defined by the following process: Starting with any positive integer, replace the number by the sum of the squares of its digits, and repeat the process until the number equals 1 (where it will stay), or it loops endlessly in a cycle which does not include 1. Those numbers for which this process ends in 1 are happy numbers. Example: 1234567Input: 19Output: trueExplanation: 12 + 92 = 8282 + 22 = 6862 + 82 = 10012 + 02 + 02 = 1 Solution第一版 123456789101112131415161718class Solution { public boolean isHappy(int n) { int sum =0; while (sum != 1) { if(sum!=0){ n=sum;sum=0; } while (n > 0) { int t = n % 10; sum += t * t; n /= 10; } if(sum==0)return false; } return true; }} 其实写完这个框架我就想起来了,可能在计算上存在死循环,就比如 如果这样的题目就进入了死循环,所以干脆直接通过hashset的方式进行过滤 添加了 12345if(set.contains(sum)){ return false;}else{ set.add(sum);} 整体代码如下: Runtime: 5 ms, faster than 9.41% of Java online submissions for Happy Number. 12345678910111213141516171819202122public boolean isHappy(int n) { Set<Integer> set = new HashSet<>(); int sum =0; while (sum != 1) { if(sum!=0){ n=sum;sum=0; } while (n > 0) { int t = n % 10; sum += t * t; n /= 10; } if(sum==0)return false; if(set.contains(sum)){ return false; }else{ set.add(sum); } } return true;}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"UniqueBinarySearchTrees","slug":"2020-03-22-UniqueBinarySearchTrees","date":"2020-03-22T04:37:47.000Z","updated":"2021-07-27T07:09:41.894Z","comments":true,"path":"2020/03/22/2020-03-22-UniqueBinarySearchTrees/","link":"","permalink":"http://zehai.info/2020/03/22/2020-03-22-UniqueBinarySearchTrees/","excerpt":"","text":"Problem 96Given n, how many structurally unique BST’s (binary search trees) that store values 1 … n? 
Example: 12345678910Input: 3Output: 5Explanation:Given n = 3, there are a total of 5 unique BST's: 1 3 3 2 1 \\ / / / \\ \\ 3 2 1 1 3 2 / / \\ \\ 2 1 2 3 Solution题目其实相对比较简单,给出1~n,给出能够成的BST的数目,题目一开始的想法是用1~n去生成BST,看一下有多少种情况,然后做了很多无用功=.= 越写越不对劲后来查了一下,这道题是有数学规律的 BST有几个特点 中序遍历依次增(大于等于) 左右自述也是BST(recursion) 所以在i作为根节点时,左子树i-1个节点,右子树n-i个节点 数学的思想在于唯一二叉树的个数为左子树结点的个数乘以右子树的个数。而根节点可以从1到n 中选择,所以有 for(int i=1;i<=n;++i) sum+=numTrees(i-1)*numTrees(n-i); 再加上边际控制n<=1–>sum=1 就有了解题的代码: 12345678910class Solution { public int numTrees(int n) { if(n<=1) return 1; int sum=0; for(int i=1;i<=n;++i) sum+=numTrees(i-1)*numTrees(n-i); return sum; }} Solution 95 Unique Binary Search Trees II万幸,自己折腾的生成BST的代码没白写 Given an integer n, generate all structurally unique BST’s (binary search trees) that store values 1 … n. Example: 1234567891011121314151617Input: 3Output:[ [1,null,3,2], [3,2,null,1], [3,1,null,null,2], [2,1,3], [1,null,2,null,3]]Explanation:The above output corresponds to the 5 unique BST's shown below: 1 3 3 2 1 \\ / / / \\ \\ 3 2 1 1 3 2 / / \\ \\ 2 1 2 3 看题目是前序遍历,我们从上向下查找,外面一层大循环遍历根节点 for(int i=start ;i<=end;i++){} 确定了i节点后可以通过递归写出根节点i的情况下的左右子树 List leftChild = recursion(start, i - 1); List rightChild = recursion(i + 1, end); 然后遍历左右子树的每个元素,两层for循环嵌套 for(TreeNode left : leftChild) { for(TreeNode right : rightChild) { TreeNode root = new TreeNode(i); root.left = left; root.right = right; res.add(root); } } 得到最后的res进行返回,以及处理一下start>end的边际条件就完成了 123456789101112131415161718192021222324252627282930313233343536/** * Definition for a binary tree node. 
* public class TreeNode { * int val; * TreeNode left; * TreeNode right; * TreeNode(int x) { val = x; } * } */class Solution { public List<TreeNode> generateTrees(int n) { if(n < 1) return new ArrayList<TreeNode>(); return recursion(1, n); } public List<TreeNode> recursion(int start,int end){ List<TreeNode> res = new ArrayList(); if(start > end) { res.add(null); return res; } for(int i = start;i<=end;i++){ List<TreeNode> leftChild = recursion(start, i - 1); List<TreeNode> rightChild = recursion(i + 1, end); for(TreeNode left : leftChild) { for(TreeNode right : rightChild) { TreeNode root = new TreeNode(i); root.left = left; root.right = right; res.add(root); } } } return res; }} 问题当时卡在 List leftChild = recursion(start, i - 1);List rightChild = recursion(i + 1, end); 当然采用recursion虽然简洁易懂,但两条题目的复杂度都相对较高,是递归的压栈造成的,很多可能相同点的地方可能计算了两遍,导致了两道题目都是打败了5%的solution,当然我们可以通过dp(来自LeetCode)的方式来进行完成 12345678910111213141516171819202122232425262728293031323334class Solution { public List<TreeNode> generateTrees(int n) { if(n == 0) return new ArrayList<>(); List<TreeNode>[][] dp = new ArrayList[n][n]; return helper(1, n, dp); } private List<TreeNode> helper(int start, int end, List<TreeNode>[][] dp){ List<TreeNode> res = new ArrayList<>(); if(start > end){ res.add(null); return res; } if(dp[start - 1][end - 1] != null && !dp[start - 1][end - 1].isEmpty()){ return dp[start - 1][end - 1]; } for (int i = start ; i <= end ; i++) { List<TreeNode> left = helper(start, i - 1, dp); List<TreeNode> right = helper(i + 1, end, dp); for(TreeNode a : left){ for(TreeNode b : right){ TreeNode node = new TreeNode(i); node.left = a; node.right = b; res.add(node); } } } return dp[start - 1][end - 1] = res; 
}}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"QUIC","slug":"2020-03-16-QUIC","date":"2020-03-16T10:24:42.000Z","updated":"2021-07-27T07:09:41.894Z","comments":true,"path":"2020/03/16/2020-03-16-QUIC/","link":"","permalink":"http://zehai.info/2020/03/16/2020-03-16-QUIC/","excerpt":"","text":"What快速UDP网络连接(Quick UDP Internet Connections,QUIC) 是一种实验性的传输层网络传输协议,由Google开发,在2013年实现。QUIC使用UDP协议,它在两个端点间创建连线,且支持多路复用连线。在设计之初,QUIC希望能够提供等同于SSL/TLS层级的网络安全保护,减少数据传输及创建连线时的延迟时间,双向控制带宽,以避免网络拥塞。Google希望使用这个协议来取代TCP协议,使网页传输速度加快。2018年10月,IETF的HTTP及QUIC工作小组正式将基于QUIC协议的HTTP(HTTP over QUIC)重命名为HTTP/3以为确立下一代规范做准备。 Featurecompared with HTTP2+TCP+TLS 无TCP握手及TLS握手–>快 改进的拥塞控制 避免队头阻塞的多路复用 前向冗余纠错 Reason 中间设备僵化(防火墙,NAT等硬件设备固话443,80端口,NAT擦写地址,抛弃不认识的选项字段等旧规则) 依赖操作系统实现导致的协议僵化(依赖底层TCP很难快迭代) 建立连接的握手延迟大(HTTPS/2 use TSL 使得TCP,TLS握手时间较长) 队头阻塞(序号顺序接受,前面丢了后面接受直接丢弃) WhyRTT0RTT (0次Round-Trip Time,0次往返)建连可以说是 QUIC 相比 HTTP2 最大的性能优势。那什么是 0RTT 建连呢?这里面有两层含义。 传输层 0RTT 就能建立连接。 加密层 0RTT 就能建立加密连接。 一个完整的 TLS 握手需要两次: Client 发送 ClientHello;Server 回复 ServerHello Client 回复最终确定的 Key,Finished;Server 回复 Finished 握手完毕,Client 发送加密后的 HTTP 请求;Server 回复加密后的 HTTP 响应 TLS Session Resumption Client 发送 ClientHello(包含 Session ID);Server 回复 ServerHello 和 Finished 握手完毕,Client 发送加密后的 HTTP 请求;Server 回复加密后的 HTTP 响应 TLS 0RTT 0 RTT 是 TLSv1.3 的可选功能。客户端和服务器第一次建立会话时,会生成一个 PSK(pre-shared key)。服务器会用 ticket key 去加密 PSK,作为 Session Ticket 返回。 客户端再次和服务器建立会话时,会先用 PSK 去加密 HTTP 请求,然后把加密后的内容发给服务器。服务器解密 PSK,然后再用 PSK 去解密 HTTP 请求,并加密 HTTP 响应。 HTTPS 握手已经跟 HTTP 请求合并到一起 1.Client 发送 ClientHello(包含 PSK)和加密后的 HTTP 请求;Server 回复 ServerHello 和 Finished 和加密后的 HTTP 响应。 congestion controlTCP采用了 慢启动 拥塞避免 快重传 快恢复 QUCI默认支持Cubic,另外支持CubicBytes,Reno,RenoBytes,BBR,PCC Pluggable可插拔,即灵活生效不需要重启或改变底层 应用层实现不同的拥塞控制算法,不需要底层支持 单个应用程序的不同连接支持不同的拥塞控制,如BBR,Cubic 应用程序无需变动直接变更拥塞控制,reload生效 
STGW在配置层面进行了优化,针对不同业务,不同网络芝士,不同RTT,使用不同拥塞控制 单递增的Packet Number为了保障TCP的可靠性,使用Seq(sequenceNumber 序号)和ack来确认,N丢失,重传N(问题:N如果重传两次,收到一个ACK,不知道是哪个的ACK) QUIC使用PacketNumber代替seq,并且packetnumber严格递增,也就是说就算 Packet N 丢失了,重传的 Packet N 的 Packet Number 已经不是 N,而是一个比 N 大的值,另外支持Stream offset更好支持多个packet传输 不允许Renegingreneging:TCP通信时,如果发送序列中间某个数据包丢失,TCP会通过重传最后确认的包开始的后续包,这样原先已经正确传输的包也可能重复发送,急剧降低了TCP性能。 为改善这种情况,发展出SACK(Selective Acknowledgment, 选择性确认)技术,使TCP只重新发送丢失的包,不用发送后续所有的包,而且提供相应机制使接收方能告诉发送方哪些数据丢失,哪些数据重发了,哪些数据已经提前收到等 QUIC禁止reneging 更多的ack块TCP 的 Sack 选项能够告诉发送方已经接收到的连续 Segment 的范围,方便发送方进行选择性重传。 由于 TCP 头部最大只有 60 个字节,标准头部占用了 20 字节,所以 Tcp Option 最大长度只有 40 字节,再加上 Tcp Timestamp option 占用了 10 个字节 [25],所以留给 Sack 选项的只有 30 个字节。 每一个 Sack Block 的长度是 8 个,加上 Sack Option 头部 2 个字节,也就意味着 Tcp Sack Option 最大只能提供 3 个 Block。 但是 Quic Ack Frame 可以同时提供 256 个 Ack Block,在丢包率比较高的网络下,更多的 Sack Block 可以提升网络的恢复速度,减少重传量。 ack delay时间收到客户端请求到响应的过程时间成为ack delay,QUIC的RTT需要减掉ack delay(计算我是没看懂。。。) 基于stream和connection级别的流量控制作用: stream可以认为是一条HTTP请求 Connection可以类比一条TCP连接,在connection上存在多条stream tcp承载多个http请求 window_update告诉对方自己接受的字节数 blockFrame告诉对方由于流量控制被阻塞,无法发送数据 stream可用窗口=最大窗口数-收到的最大偏移数 connection可用窗口=$\\sum$streams可用窗口 没有队头阻塞的多路复用QUIC 的多路复用和 HTTP2 类似。在一条 QUIC 连接上可以并发发送多个 HTTP 请求 (stream)。但是 QUIC 的多路复用相比 HTTP2 有一个很大的优势。 QUIC 一个连接上的多个 stream 之间没有依赖。这样假如 stream2 丢了一个 udp packet,也只会影响 stream2 的处理。不会影响 stream2 之前及之后的 stream 的处理。 这也就在很大程度上缓解甚至消除了队头阻塞的影响。 HTTP2 在一个 TCP 连接上同时发送 4 个 Stream。其中 Stream1 已经正确到达,并被应用层读取。但是 Stream2 的第三个 tcp segment 丢失了,TCP 为了保证数据的可靠性,需要发送端重传第 3 个 segment 才能通知应用层读取接下去的数据,虽然这个时候 Stream3 和 Stream4 的全部数据已经到达了接收端,但都被阻塞住了。 不仅如此,由于 HTTP2 强制使用 TLS,还存在一个 TLS 协议层面的队头阻塞 Record 是 TLS 协议处理的最小单位,最大不能超过 16K,一些服务器比如 Nginx 默认的大小就是 16K。由于一个 record 必须经过数据一致性校验才能进行加解密,所以一个 16K 的 record,就算丢了一个字节,也会导致已经接收到的 15.99K 数据无法处理,因为它不完整。 那 QUIC 多路复用为什么能避免上述问题呢? 
QUIC 最基本的传输单元是 Packet,不会超过 MTU 的大小,整个加密和认证过程都是基于 Packet 的,不会跨越多个 Packet。这样就能避免 TLS 协议存在的队头阻塞。 Stream 之间相互独立,比如 Stream2 丢了一个 Pakcet,不会影响 Stream3 和 Stream4。不存在 TCP 队头阻塞。 当然,并不是所有的 QUIC 数据都不会受到队头阻塞的影响,比如 QUIC 当前也是使用 Hpack 压缩算法 [10],由于算法的限制,丢失一个头部数据时,可能遇到队头阻塞。 总体来说,QUIC 在传输大量数据时,比如视频,受到队头阻塞的影响很小。 加密认证的报文TCP 协议头部没有经过任何加密和认证,所以在传输过程中很容易被中间网络设备篡改,注入和窃听。比如修改序列号、滑动窗口。这些行为有可能是出于性能优化,也有可能是主动攻击。 但是 QUIC 的 packet 可以说是武装到了牙齿。除了个别报文比如 PUBLIC_RESET 和 CHLO,所有报文头部都是经过认证的,报文 Body 都是经过加密的。 这样只要对 QUIC 报文任何修改,接收端都能够及时发现,有效地降低了安全风险。 连接迁移一条 TCP 连接 [17] 是由四元组标识的(源 IP,源端口,目的 IP,目的端口),当其中任何一个元素发生变化时,这条连接依然维持着,能够保持业务逻辑不中断 比如大家使用手机在 WIFI 和 4G 移动网络切换时,客户端的 IP 肯定会发生变化,需要重新建立和服务端的 TCP 连接。 又比如大家使用公共 NAT 出口时,有些连接竞争时需要重新绑定端口,导致客户端的端口发生变化,同样需要重新建立 TCP 连接。 针对 TCP 的连接变化,MPTCP[5] 其实已经有了解决方案,但是由于 MPTCP 需要操作系统及网络协议栈支持,部署阻力非常大,目前并不适用。 所以从 TCP 连接的角度来讲,这个问题是无解的。 那 QUIC 是如何做到连接迁移呢?很简单,任何一条 QUIC 连接不再以 IP 及端口四元组标识,而是以一个64 位的随机数作为 ID 来标识,这样就算 IP 或者端口发生变化时,只要 ID 不变,这条连接依然维持着,上层业务逻辑感知不到变化,不会中断,也就不需要重连。 由于这个 ID 是客户端随机产生的,并且长度有 64 位,所以冲突概率非常低。 其他此外,QUIC 还能实现前向冗余纠错,在重要的包比如握手消息发生丢失时,能够根据冗余信息还原出握手消息。 QUIC 还能实现证书压缩,减少证书传输量,针对包头进行验证等。","raw":null,"content":null,"categories":[{"name":"Introduction","slug":"Introduction","permalink":"http://zehai.info/categories/Introduction/"}],"tags":[{"name":"QUIC","slug":"QUIC","permalink":"http://zehai.info/tags/QUIC/"}]},{"title":"Traversal","slug":"2020-03-15-BinaryTreeLevelOrderTraversal","date":"2020-03-15T09:23:39.000Z","updated":"2021-07-27T07:09:41.894Z","comments":true,"path":"2020/03/15/2020-03-15-BinaryTreeLevelOrderTraversal/","link":"","permalink":"http://zehai.info/2020/03/15/2020-03-15-BinaryTreeLevelOrderTraversal/","excerpt":"","text":"Problem 102 107Given a binary tree, return the level order traversal of its nodes’ values. (ie, from left to right, level by level). 
For example:Given binary tree [3,9,20,null,null,15,7], 12345 3 / \\9 20 / \\ 15 7 return its level order traversal as: 12345[ [3], [9,20], [15,7]] Solutionkey: 层序遍历 递归 在Java中可以先定义一个List保存结果,List里面再嵌入ArrayList来记录每一层的数据 List<List> res = new ArrayList<>(); res.add(new ArrayList<>()); 将递归中的root节点追加进入res.get(level)的数组中 res.get(level).add(root.val); 通过递归完成算法 travelsal(root.left,level+1);travelsal(root.right,level+1); 12345678910111213141516171819202122232425262728/** * Definition for a binary tree node. * public class TreeNode { * int val; * TreeNode left; * TreeNode right; * TreeNode(int x) { val = x; } * } */class Solution { List<List<Integer>> res = new ArrayList<>(); public List<List<Integer>> levelOrder(TreeNode root) { travelsal(root, 0); return res; } private void travelsal(TreeNode root,int level) { if(root==null){ return; } if(level==res.size()){ res.add(new ArrayList<>()); } res.get(level).add(root.val); travelsal(root.left,level+1); travelsal(root.right,level+1); }} 接下来是107,是102的变种,改成了叶节点开始遍历 difficulty:Easy Given a binary tree, return the bottom-up level order traversal of its nodes’ values. (ie, from left to right, level by level from leaf to root). 
For example:Given binary tree [3,9,20,null,null,15,7], 3 / \\9 20 / \\ 15 7 return its bottom-up level order traversal as: [ [15,7], [9,20], [3]] key: The problem itself adds little extra difficulty; we only need to reverse the order of the inner per-level lists in the result, so res.get(level).add(root.val); change this code to res.get(res.size()-i-1).add(root.val); and the statement that appends a new list becomes one that inserts a new list at position 0: if(i >= res.size()){ res.add(0,new ArrayList());}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"Egg插件到底封装了啥","slug":"2020-03-13-Egg插件到底封装了啥","date":"2020-03-13T05:23:12.000Z","updated":"2021-07-27T07:09:41.893Z","comments":true,"path":"2020/03/13/2020-03-13-Egg插件到底封装了啥/","link":"","permalink":"http://zehai.info/2020/03/13/2020-03-13-Egg%E6%8F%92%E4%BB%B6%E5%88%B0%E5%BA%95%E5%B0%81%E8%A3%85%E4%BA%86%E5%95%A5/","excerpt":"","text":"Out of curiosity I downloaded egg-redis to see how it wraps a package that Node can require directly into an Egg plugin. The core code works through","raw":null,"content":null,"categories":[],"tags":[]},{"title":"","slug":"2020-03-11-MaximumDepthOfBinaryTree","date":"2020-03-11T09:47:55.000Z","updated":"2021-07-27T07:09:41.893Z","comments":true,"path":"2020/03/11/2020-03-11-MaximumDepthOfBinaryTree/","link":"","permalink":"http://zehai.info/2020/03/11/2020-03-11-MaximumDepthOfBinaryTree/","excerpt":"","text":"Problem 104Given a binary tree, find its maximum depth. The maximum depth is the number of nodes along the longest path from the root node down to the farthest leaf node. Note: A leaf is a node with no children. Example: Given binary tree [3,9,20,null,null,15,7], 3 / \\9 20 / \\ 15 7 return its depth = 3. key: To find the depth of the tree, recurse with int left = max(root.left);int right = max(root.right);return Math.max(left,right) + 1; //or, more concisely return Math.max(max(root.left) + 1, max(root.right) + 1); Runtime: 0 ms, faster than 100.00% of Java online submissions for Maximum Depth of Binary Tree. 
Memory Usage: 39.2 MB, less than 94.62% of Java online submissions for Maximum Depth of Binary Tree. /** * Definition for a binary tree node. * public class TreeNode { * int val; * TreeNode left; * TreeNode right; * TreeNode(int x) { val = x; } * } */class Solution { public int maxDepth(TreeNode root) { return max(root); } public int max(TreeNode root){ if (root == null) { return 0; } return Math.max(max(root.left) + 1, max(root.right) + 1); }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"SymmetricTree","slug":"2020-03-10-SymmetricTree","date":"2020-03-10T10:33:33.000Z","updated":"2021-07-27T07:09:41.893Z","comments":true,"path":"2020/03/10/2020-03-10-SymmetricTree/","link":"","permalink":"http://zehai.info/2020/03/10/2020-03-10-SymmetricTree/","excerpt":"","text":"Problem101Given a binary tree, check whether it is a mirror of itself (ie, symmetric around its center). For example, this binary tree [1,2,2,3,4,4,3] is symmetric: 1 / \\ 2 2 / \\ / \\3 4 4 3 But the following [1,2,2,null,3,null,3] is not: 1 / \\2 2 \\ \\ 3 3 Note:Bonus points if you could solve it both recursively and iteratively. key: A problem of checking whether a tree is symmetric, solved mainly with recursion /** * Definition for a binary tree node. 
* public class TreeNode { * int val; * TreeNode left; * TreeNode right; * TreeNode(int x) { val = x; } * } */class Solution { public boolean isSymmetric(TreeNode root) { return isMirror(root,root); } public boolean isMirror(TreeNode root,TreeNode self){ if(root==null && self==null)return true; if(root==null ||self==null) return false; return root.val==self.val && isMirror(root.left,self.right)&&isMirror(root.right,self.left); }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"AMDvsCMD","slug":"2020-03-09-AMDvsCMD","date":"2020-03-09T02:46:11.000Z","updated":"2021-07-27T07:09:41.892Z","comments":true,"path":"2020/03/09/2020-03-09-AMDvsCMD/","link":"","permalink":"http://zehai.info/2020/03/09/2020-03-09-AMDvsCMD/","excerpt":"","text":"AMD:Asynchronous Module Definition (RequireJS) CMD:Common Module Definition(SeaJS) AMD CMD 1. eager execution deferred execution (like lazy initialization) 2. dependencies declared up front dependencies declared where used 3. browser (loading is slow, async load works better) server side 4. Asynchronous Module Definition Common Module Definition AMD: to be filled in, import-export CMD define Function: one file is one module; a CMD wrapper is added around the outside of our code, which is why we can reference require, exports, module directly define(function(require, exports, module) { // code}); Single argument: define(factory)param-->factory:function|Object|Stringdefine({ "foo": "bar" });define('I am a template. 
My name is {{name}}.'); 多个参数define define(id?, deps?, factory) 12345define('hello', ['jquery'], function(require, exports, module) { // code});id:String模块标识deps:Array模块依赖 define.cmd Object 1234if (typeof define === "function" && define.cmd) { // 有 Sea.js 等 CMD 模块加载器存在}//用来判断当前页面是否有CMD模块加载器 require Function同步加载 123456789define(function(require, exports) { // 获取模块 a 的接口 var a = require('./a'); // 调用模块 a 的方法 a.doSomething();}); require.async Function异步加载 1234567891011121314define(function(require, exports, module) { // 异步加载一个模块,在加载完成时,执行回调 require.async('./b', function(b) { b.doSomething(); }); // 异步加载多个模块,在加载完成时,执行回调 require.async(['./c', './d'], function(c, d) { c.doSomething(); d.doSomething(); });}); require.resolve返回解析后的绝对路径 exprotsreturn Object,对外提供接口 1234567891011121314151617181920212223242526define(function(require, exports) { // 对外提供 foo 属性 exports.foo = 'bar'; // 对外提供 doSomething 方法 exports.doSomething = function() {};});retrun可以实现同等效果define(function(require) { // 通过 return 直接提供接口 return { foo: 'bar', doSomething: function() {} };});以及个人不太喜欢的缩略写法define({ foo: 'bar', doSomething: function() {}}); 但以下写法是错误的 123456789define(function(require, exports) { // 错误用法!!! 
exports = { foo: 'bar', doSomething: function() {} };}); exports 仅仅是 module.exports 的一个引用。在 factory 内部给 exports 重新赋值时,并不会改变 module.exports 的值。因此给 exports 赋值是无效的,不能用来更改模块接口。 我说句简单的话:exports和module.exports,都是地址,指向同一个内容,如果你给exports赋值了一个新对象,他指向的内容就完全变了,和module.exprots就指向不是同一个地方了 modulemodeule是一个对象,存储与当前模块相关联的一些属性和方法,默认为{} module:function module.id:String模块标识 module.url:String返回绝对路径(默认id=url,除非手写id) module.dependencies:Array模块依赖 module.export:Object 大部分情况下和exports通用,但如果模块是一个类,就应该直接赋值给module.exports,这样调用就是一个类的构造器,可以直接new实例 12345678module.exports=new Person();const p = require(./xxx.js);p.say();//orexports.p = new Person();const {p} = require(./xxxjs);p.say();","raw":null,"content":null,"categories":[{"name":"JavaScript","slug":"JavaScript","permalink":"http://zehai.info/categories/JavaScript/"}],"tags":[{"name":"other","slug":"other","permalink":"http://zehai.info/tags/other/"}]},{"title":"Construct Binary Tree from Preorder and Inorder Traversal","slug":"2020-03-08-Construct Binary Tree from Preorder and Inorder Traversal","date":"2020-03-08T03:31:16.000Z","updated":"2021-07-27T07:09:41.892Z","comments":true,"path":"2020/03/08/2020-03-08-Construct Binary Tree from Preorder and Inorder Traversal/","link":"","permalink":"http://zehai.info/2020/03/08/2020-03-08-Construct%20Binary%20Tree%20from%20Preorder%20and%20Inorder%20Traversal/","excerpt":"","text":"Problem105Given preorder and inorder traversal of a tree, construct the binary tree. Note:You may assume that duplicates do not exist in the tree. 
For example, given 12preorder = [3,9,20,15,7]inorder = [9,3,15,20,7] Return the following binary tree: 12345 3 / \\9 20 / \\ 15 7 key 题目是一个根据前序中序,生成二叉树的题目 前序遍历有个特点:根节点在前面,root -left-right 则遍历到3作为root,根据中序可以知道左子树是9,右子树是15 20 7 然后遍历9作为root,根据中序得到没有左子树,没有右子树 然后遍历20作为root,依次类推可以得到 123TreeNode root = new TreeNode(rootVal);root.left = buildTree(pre, preStart+1, preStart+len, in, inStart, rootIndex-1);root.right = buildTree(pre, preStart+len+1, preEnd, in, rootIndex+1, inEnd); 其中insort比较好理解,确定root后 左子树在inStart, rootIndex-1之间 右子树在rootIndex+1, inEnd之间 对于presort int len = rootIndex - inStart;获得root的左子树长度(根据中序获取rootIndex) 左子树在preStart+1, preStart+len之间 右子树在preStart+len+1, preEnd之间 Solution12345678910111213141516171819202122232425262728293031323334/** * Definition for a binary tree node. * public class TreeNode { * int val; * TreeNode left; * TreeNode right; * TreeNode(int x) { val = x; } * } */class Solution { public TreeNode buildTree(int[] preorder, int[] inorder) { return buildTree(preorder, 0, preorder.length-1, inorder, 0, inorder.length-1); } public TreeNode buildTree(int[] pre, int preStart, int preEnd, int[] in, int inStart, int inEnd){ if(inStart > inEnd || preStart > preEnd) return null; int rootVal = pre[preStart]; int rootIndex = 0; for(int i = inStart; i <= inEnd; i++){ if(in[i] == rootVal){ rootIndex = i; break; } } int len = rootIndex - inStart; TreeNode root = new TreeNode(rootVal); root.left = buildTree(pre, preStart+1, preStart+len, in, inStart, rootIndex-1); root.right = buildTree(pre, preStart+len+1, preEnd, in, rootIndex+1, inEnd); return root; }} 
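The same split can be sketched outside Java too; a rough JavaScript version for comparison (assuming nodes are plain objects with val, left and right rather than the LeetCode TreeNode class):

```javascript
// Sketch of the same idea: the first preorder element is the root; its
// index in the inorder array tells us how many nodes the left subtree has.
function buildTree(preorder, inorder) {
  if (preorder.length === 0) return null;
  const rootVal = preorder[0];
  const rootIndex = inorder.indexOf(rootVal); // position of root in inorder
  const leftLen = rootIndex;                  // size of the left subtree
  return {
    val: rootVal,
    left: buildTree(preorder.slice(1, 1 + leftLen), inorder.slice(0, rootIndex)),
    right: buildTree(preorder.slice(1 + leftLen), inorder.slice(rootIndex + 1))
  };
}
```

Slicing copies the arrays, so this trades the index arithmetic of the Java version for clarity; an index-based variant would avoid the extra allocations.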
tip参考于百度,在递归条件乱了","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"BinaryTreeInorderTraversal","slug":"2020-03-06-BinaryTreeInorderTraversal","date":"2020-03-06T03:55:16.000Z","updated":"2021-07-27T07:09:41.891Z","comments":true,"path":"2020/03/06/2020-03-06-BinaryTreeInorderTraversal/","link":"","permalink":"http://zehai.info/2020/03/06/2020-03-06-BinaryTreeInorderTraversal/","excerpt":"","text":"Problem94Given a binary tree, return the inorder traversal of its nodes’ values. 给定一二叉树,中序遍历输出 ps:preorder,inorder,postorder,前中后 Keyrecursive approach利用递归解决B树的遍历问题,这种问题的代码其实大同小异,前中后的遍历输出,只需要调整递归部分即可 12345678910111213141516171819202122232425262728//preorderpublic void preorder(node t) if (t != null) { System.out.print(t.value + " "); preorder(t.left); preorder(t.right); }}//inorderpublic void inorder(node t){ if (t != null) { inorder(t.left); System.out.print(t.value + " "); inorder(t.right); }}//postorderpublic void postorder(node t){ if (t != null) { postorder(t.left); postorder(t.right); System.out.print(t.value + " "); }}//leverorder Solution Runtime: 0 ms, faster than 100.00% of Java online submissions for Binary Tree Inorder Traversal. Memory Usage: 37.9 MB, less than 5.11% of Java online submissions for Binary Tree Inorder Traversal. 1234567891011121314151617181920212223/** * Definition for a binary tree node. * public class TreeNode { * int val; * TreeNode left; * TreeNode right; * TreeNode(int x) { val = x; } * } */class Solution { public List<Integer> inorderTraversal(TreeNode root) { List < Integer > res = new ArrayList < > (); inorder(root, res); return res; } public void inorder(TreeNode root, List < Integer > res) { if (root != null) { inorder(root.left, res); res.add(root.val); inorder(root.right, res); } }} Complexity Analysis Time complexity : O(n)O(n). 
The time complexity is O(n)O(n) because the recursive function is T(n) = 2 \\cdot T(n/2)+1T(n)=2⋅T(n/2)+1. Space complexity : The worst case space required is O(n)O(n), and in the average case it’s O(\\log n)O(logn) where nn is number of nodes. stacksolution还提供了另外一种方法通过stack pop的方式来完成: https://leetcode.com/problems/binary-tree-inorder-traversal/solution/ Morris同上","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"2020-03-05-hexoNexTv7.7.2","slug":"2020-03-05-hexoNexTv7-7-2","date":"2020-03-05T06:42:08.000Z","updated":"2021-07-27T07:09:41.891Z","comments":true,"path":"2020/03/05/2020-03-05-hexoNexTv7-7-2/","link":"","permalink":"http://zehai.info/2020/03/05/2020-03-05-hexoNexTv7-7-2/","excerpt":"","text":"I find hexo’s theme:nexT v7.7.2 has some new features native dark modewe can set 1darkmode:true to open native dark mode and there are other features like support MathJax v3.0,we use $$ add next_config helper how to update newest version1.git clone https://github.com/theme-next/hexo-theme-next themes/next or in releases to download newest source code 2.copy file to hexo/theme/ such as : /themes/hexo-theme-next-7.7.2/ 3.open hexo’s _config.yml,and change theme’s value to hexo-theme-next-7.7.2 and u change your them successfully 4.update /themes/hexo-theme-next-7.7.2/_config.yml Last , u can create new post to log your daily life 1yarn upgrade caniuse-lite browserslist and these days ,zehai.info ,may Expired 
,sad","raw":null,"content":null,"categories":[{"name":"others","slug":"others","permalink":"http://zehai.info/categories/others/"}],"tags":[{"name":"NexT","slug":"NexT","permalink":"http://zehai.info/tags/NexT/"}]},{"title":"2020-02-28-关于今天的一些思考","slug":"2020-02-28-关于今天的一些思考","date":"2020-02-28T14:28:37.000Z","updated":"2021-07-27T07:09:41.890Z","comments":true,"path":"2020/02/28/2020-02-28-关于今天的一些思考/","link":"","permalink":"http://zehai.info/2020/02/28/2020-02-28-%E5%85%B3%E4%BA%8E%E4%BB%8A%E5%A4%A9%E7%9A%84%E4%B8%80%E4%BA%9B%E6%80%9D%E8%80%83/","excerpt":"","text":"今天确实发生了一些事情,避之不谈 让我想起来了之前我在bili遇到的一件事情,一个up主癌症,自己经济能力不是很好,拍了一些很粗糙,没有剪辑过的视频,大意交代了自己得病,没有钱,拍了病历本,化验单,希望大家有能力的捐一点,后来up大概是拿到了一部分钱,具体多少我不太清楚,后来不知道发生了什么,画风开始转变 up视频的下面出现了很多评论 评论up有两个手机,家里有钱,然后up就对焦给大家看了他的两个手机,我记得两个都是红米类似的便宜机器,而且买了很久了 后来又人评论他家多有钱,然后up就拍下了回家和奶奶在一起的场面(当时已经没钱住院,就回家筹钱换医院试试) 后来又有人评论up主根本就没病,出来骗人钱,up就拍视频给人看治疗过程中的病历,化验单,至少我看不出来造假的证据 后来up出院了,买了张车票回家,和一个月前相比头发掉了很多,弹幕里面各种质疑,评论里面一片质疑, 亲身经历,环顾整个过程,我没有给up捐赠一分钱,也没有给予他任何帮助,就看了他整个生病的过程,从开始的加油,变成了一个‘骗子’,人们存在于网络之后,确实可以发表自己对于一件事情的看法,我想我如果是那个up,深陷其中一定很无奈 陈述结束 最后疫情一定会过去的","raw":null,"content":null,"categories":[],"tags":[]},{"title":"2020-02-28-JS相关技术名词","slug":"2020-02-28-JS相关技术名词","date":"2020-02-28T12:07:43.000Z","updated":"2021-07-27T07:09:41.890Z","comments":true,"path":"2020/02/28/2020-02-28-JS相关技术名词/","link":"","permalink":"http://zehai.info/2020/02/28/2020-02-28-JS%E7%9B%B8%E5%85%B3%E6%8A%80%E6%9C%AF%E5%90%8D%E8%AF%8D/","excerpt":"","text":"今天中午有收到Egg团队公开的文件调查,提及了很多技术名词,虽然不一定用到,但我也觉得列举出来会方便大家了解和比较,后续可能更新我用过的部分 代码检查工具 ESLint JSCS JSHint JSDoc Standard TSLint Flow 引入目的:规范代码 ESLint 通过extend继承某一个大类,然后配置rules来进行代码规范 JSCS JSHint JSDoc Standard TSLint Flow 使用感受解决了以下问题 node是一门弱语言,进行校验(非变量类型校验,仅校验变量是否声明,是否可改等) node在use strict模式下,eslint可以校验 团队合作,防止队友挖坑 其实ESLint只是一种语法校验,更多的还有流程上的规范,就像网传阿里的开发规范一样,就好比node中你可以用类的语法糖,也可以用原型,当一件事情有多种实现方式时,需要规范来选择一个普遍公用的,易维护,易扩展的方案 除去语法校验,还有TS的类型校验,比如GIT的分支规范,如master,staging,backup,develop,other branch 转义语言 TS ClojureScript CoffeeScript 
Dart Elm Scala.js Haxe Nim PureScript Reason 转移语言是2019年聊的比较多的,解决问题: 类型校验,能够很好解决JS开发中,你不知道这个object里面有什么key,或者某个对象里面有什么方法(egg.js实际开发过程中,ctx.service.v1.handlexxx()就ctrl跳转不了,也不会有提示) WEB框架 Express Koa Egg Nest.js Next.js Fastify.js Hapi.js Restify.js Loopback.io Sails.js Midway.js 面试常被问到框架的问题,因为很多公司不会将项目搭建在原生的node服务上 缺少约束,合作模式下,个人有个人的风格 项目配置繁琐,很多东西配置零散堆放 重复造轮子,框架提供较好的轮子 安全事宜,框架处理 etc 一个好的框架事半功倍,express是一个非常轻量的框架 fast unopinionated(干净的) minimalist Egg是一个企业级框架,约定大于配置 Provide capability to customizd framework base on Egg(可扩展) Highly extensible plugin mechanism(插件牛逼) Built-in cluster(多进程牛逼) Based on Koa with high performance(企业级别性能优异) Stable core framework with high test coverage(稳定) Progressive development(业务迭代,代码可以渐进继承) 数据库 MySQL PostgreSql Redis MongoDB SQL Server SQLLite influxdb HBASE TiDB Oracle DB2 数据库是仅此于语言本身,另外的考点了,因为没有一个服务不涉猎存储,而数据库作为系统的数据基础,不仅重要也成为了面试的重点 mysql等关系型数据库,范式,事务,innodb,读写分离,分表 Mongo,Redis等非关系型数据基础类型,聚合等 反向代理 Nginx Tomcat [ ] Apache 解决负载均衡 预处理一些请求,如过滤重复请求 进程管理 Docker PM2 forever naught node-supervisor Supervisord(Unix) docker集大成者,在微服务等场景应用较多 RPC方式 HTTP Thrift gRPC dubbo MQ 开发场景 服务端API SSR应用 Proxy层 BFF层 代码片段,如Spark代码片段 CLI & 工具 tips","raw":null,"content":null,"categories":[],"tags":[{"name":"List","slug":"List","permalink":"http://zehai.info/tags/List/"}]},{"title":"2020-01-31-JS设计模式","slug":"2020-01-31-JS设计模式","date":"2020-01-31T08:39:11.000Z","updated":"2021-07-27T07:09:41.889Z","comments":true,"path":"2020/01/31/2020-01-31-JS设计模式/","link":"","permalink":"http://zehai.info/2020/01/31/2020-01-31-JS%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/","excerpt":"","text":"模式共计八种: 单例模式 构造器模式 建造者模式 代理模式 外观模式 观察者模式 策略模式 迭代器模式 设计模式的提出,为了更好的解耦,可拓展,服务可靠,不限定某种语言的实现 单例模式概念一个类只有一个实例,如果存在就不实例化,如果不存在则new,以保证一个类只有一个实例 作用 模块间通信 保证某个类的对象的唯一性 防止变量污染 注意 this的使用 闭包容易stack over flow需要及时清理 创建新对象成本较高 实际案例如网站的计数器,多线程的线程池 1234567891011121314151617181920212223242526(function(){ // 养鱼游戏 let fish = null function catchFish() { // 如果鱼存在,则直接返回 if(fish) { return fish }else { // 如果鱼不存在,则获取鱼再返回 fish 
= document.querySelector('#cat') return { fish, water: function() { let water = this.fish.getAttribute('weight') this.fish.setAttribute('weight', ++water) } } } } // 每隔3小时喂一次水 setInterval(() => { catchFish().water() }, 3*60*60*1000)})() 构造器模式","raw":null,"content":null,"categories":[],"tags":[]},{"title":"2020-01-31-RomanToInteger","slug":"2020-01-31-RomanToInteger","date":"2020-01-31T03:17:45.000Z","updated":"2021-07-27T07:09:41.889Z","comments":true,"path":"2020/01/31/2020-01-31-RomanToInteger/","link":"","permalink":"http://zehai.info/2020/01/31/2020-01-31-RomanToInteger/","excerpt":"","text":"Leetcode13Roman numerals are represented by seven different symbols: I, V, X, L, C, D and M. Symbol ValueI 1V 5X 10L 50C 100D 500M 1000For example, two is written as II in Roman numeral, just two one’s added together. Twelve is written as, XII, which is simply X + II. The number twenty seven is written as XXVII, which is XX + V + II. Roman numerals are usually written largest to smallest from left to right. However, the numeral for four is not IIII. Instead, the number four is written as IV. Because the one is before the five we subtract it making four. The same principle applies to the number nine, which is written as IX. There are six instances where subtraction is used: I can be placed before V (5) and X (10) to make 4 and 9.X can be placed before L (50) and C (100) to make 40 and 90.C can be placed before D (500) and M (1000) to make 400 and 900.Given a roman numeral, convert it to an integer. Input is guaranteed to be within the range from 1 to 3999. Example 1: Input: “III”Output: 3Example 2: Input: “IV”Output: 4Example 3: Input: “IX”Output: 9Example 4: Input: “LVIII”Output: 58Explanation: L = 50, V= 5, III = 3.Example 5: Input: “MCMXCIV”Output: 1994Explanation: M = 1000, CM = 900, XC = 90 and IV = 4. 
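The subtract-when-smaller rule described above is easy to sketch; a rough JavaScript version for comparison (the Java solution follows):

```javascript
// Map each symbol to its value; subtract when a smaller value precedes a
// larger one (IV, IX, XL, ...), otherwise add.
const ROMAN = { I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000 };
function romanToInt(s) {
  let sum = 0;
  for (let i = 0; i < s.length; i++) {
    const cur = ROMAN[s[i]];
    const next = ROMAN[s[i + 1]] || 0; // treat past-the-end as 0
    sum += cur < next ? -cur : cur;
  }
  return sum;
}
```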
Solution题目意思其实很简单,掌握了计算方法其实很简单 1234567891011121314151617181920212223242526272829303132333435363738class Solution { public int romanToInt(String s) { int nums[]=new int[s.length()]; for(int i=0;i<s.length();i++){ switch (s.charAt(i)){ case 'M': nums[i]=1000; break; case 'D': nums[i]=500; break; case 'C': nums[i]=100; break; case 'L': nums[i]=50; break; case 'X' : nums[i]=10; break; case 'V': nums[i]=5; break; case 'I': nums[i]=1; break; } } int sum=0; for(int i=0;i<nums.length-1;i++){ if(nums[i]<nums[i+1]) sum-=nums[i]; else sum+=nums[i]; } return sum+nums[nums.length-1]; }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"2020-01-31-内网穿透","slug":"2020-01-31-内网穿透","date":"2020-01-31T02:16:51.000Z","updated":"2021-07-27T07:09:41.890Z","comments":true,"path":"2020/01/31/2020-01-31-内网穿透/","link":"","permalink":"http://zehai.info/2020/01/31/2020-01-31-%E5%86%85%E7%BD%91%E7%A9%BF%E9%80%8F/","excerpt":"","text":"why解决公网访问自己的内网设备(大部分公司,小区都是在内网中,IPv4历史原因导致),解决方案: 路由器新增端口映射 花生壳动态解析软件 natapp等免费软件提供的内网映射服务 基于ngrok(不荐)或者frp自建内网映射服务 how目前推荐使用frp搭建穿透服务,支持HTTP,SSH,TCP UDP FTP","raw":null,"content":null,"categories":[],"tags":[]},{"title":"2020-01-18-plugins","slug":"2020-01-18-plugins","date":"2020-01-18T15:58:31.000Z","updated":"2021-07-27T07:09:41.888Z","comments":true,"path":"2020/01/18/2020-01-18-plugins/","link":"","permalink":"http://zehai.info/2020/01/18/2020-01-18-plugins/","excerpt":"","text":"最近更新hexo比较频繁,发现频繁性的推送master分支以及source源文件备份,比较繁琐,查询了官方文档,可以写一些监听函数,实现一些自动化部署,hexo默认将脚本放置在scripts文件夹下,以下代码可以在hexo new的时候自动打开默认编辑软件 12345var spawn = require('child_process').exec;hexo.on('new', function(data){ spawn('start "markdown编辑器绝对路径.exe" ' + data.path);}); 非常的方便,省去了我打开typora的时间 以及以下的代码可以实现自动部署source分支 123456789101112131415161718192021222324252627282930313233343536require('shelljs/global');//记得安装包try { 
hexo.on('deployAfter', function() {//当deploy完成后执行备份 run(); });} catch (e) { console.log("You make a wrong:" + e.toString());}function run() { if (!which('git')) { echo('Sorry, this script requires git'); exit(1); } else { echo("======================Auto Backup Begin==========================="); cd('./'); if (exec('git add --all').code !== 0) { echo('Error: Git add failed'); exit(1); } if (exec('git commit -am "Form auto backup script\\'s commit"').code !== 0) { echo('Error: Git commit failed'); exit(1); } if (exec('git push origin source').code !== 0) { echo('Error: Git push failed'); exit(1); } echo("==================Auto Backup Complete============================") }} 参考文献https://hexo.io/zh-cn/docs/plugins#%E5%B7%A5%E5%85%B7","raw":null,"content":null,"categories":[],"tags":[]},{"title":"2020-01-17-ImplementStr","slug":"2020-01-17-ImplementStr","date":"2020-01-17T10:25:30.000Z","updated":"2021-07-27T07:09:41.888Z","comments":true,"path":"2020/01/17/2020-01-17-ImplementStr/","link":"","permalink":"http://zehai.info/2020/01/17/2020-01-17-ImplementStr/","excerpt":"","text":"LeetCode28Implement strStr(). Return the index of the first occurrence of needle in haystack, or -1 if needle is not part of haystack. Example 1: Input: haystack = “hello”, needle = “ll”Output: 2Example 2: Input: haystack = “aaaaa”, needle = “bba”Output: -1Clarification: What should we return when needle is an empty string? This is a great question to ask during an interview. For the purpose of this problem, we will return 0 when needle is an empty string. This is consistent to C’s strstr() and Java’s indexOf(). 
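As a baseline, the naive scan can be sketched in JavaScript (a rough illustration of the same character-by-character idea, not the built-in indexOf):

```javascript
// Try each start position in haystack; compare character by character and
// return the first index where the whole needle matches, else -1.
function strStr(haystack, needle) {
  if (needle.length === 0) return 0;
  for (let i = 0; i + needle.length <= haystack.length; i++) {
    let j = 0;
    while (j < needle.length && haystack[i + j] === needle[j]) j++;
    if (j === needle.length) return i;
  }
  return -1;
}
```

The loop bound i + needle.length <= haystack.length removes the out-of-range check that the inner loop otherwise needs.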
Solution如果不考虑java偷懒的写法当然可以想到indexof的想法123456class Solution { public int strStr(String haystack, String needle) { return haystack.indexOf(needle); }}Runtime: 1 ms先按照题意写了如下代码:1234567891011121314151617181920212223242526272829class Solution { public int strStr(String haystack, String needle) { if(needle.length()==0)return 0; if(haystack.length()==0)return -1; int index =-1; boolean flag = true; for(int i=0;i<haystack.length();i++){ if(haystack.charAt(i)==needle.charAt(0)){ flag=true; for(int j =0;j<needle.length();j++){ if(i+j>=haystack.length()){ return -1; } if(haystack.charAt(i+j)!=needle.charAt(j)){ flag=false; break; }; } if(flag){ return i; } } } return index; }}Runtime: 4 msMemory Usage: 42.7 MB","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"2020-01-15-sqrtx","slug":"2020-01-15-sqrtx","date":"2020-01-15T14:04:46.000Z","updated":"2021-07-27T07:09:41.888Z","comments":true,"path":"2020/01/15/2020-01-15-sqrtx/","link":"","permalink":"http://zehai.info/2020/01/15/2020-01-15-sqrtx/","excerpt":"","text":"LeetCode-69Implement int sqrt(int x). Compute and return the square root of x, where x is guaranteed to be a non-negative integer. Since the return type is an integer, the decimal digits are truncated and only the integer part of the result is returned. Example 1: Input: 4Output: 2Example 2: Input: 8Output: 2Explanation: The square root of 8 is 2.82842…, and since the decimal part is truncated, 2 is returned. 
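The binary-search idea can also be sketched in JavaScript (a rough version; the variable names are my own):

```javascript
// Integer square root by binary search: keep the last mid whose square
// is still <= x, and narrow [left, right] until the bounds cross.
function mySqrt(x) {
  if (x < 2) return x;
  let left = 1, right = Math.floor(x / 2), ans = 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (mid * mid <= x) { ans = mid; left = mid + 1; }
    else { right = mid - 1; }
  }
  return ans;
}
```

Starting right at x / 2 works because for x >= 2 the square root never exceeds half of x.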
Solution就是手写一个根号源码,首先想到的就是通过平方来做 12345678910public int mySqrt(int x) { for(int i=46340;i<46341;i++){ if(x>=(long)i*i&&x<(long)(i+1)*(i+1)){ return i; } } return x; }Runtime: 22 msMemory Usage: 34 MB 如果不遵循题目的要求,使用Math函数,所以我们的目标大概是3ms附近 1234public int mySqrt(int x) { return (int)Math.sqrt(Double.parseDouble(String.valueOf(x))); }Runtime: 3 ms 解法粗暴,遇到大数的时候会从0重新开始计算,复杂度O(N) 第一次优化思路就是避免做两次乘法然后去比较,这个地方可以去优化 12345678910class Solution { public int mySqrt(int x) { long n = 1; while(n * n <= x) { n++; } return (int) n - 1; }}Runtime: 11 ms 第二次优化可以使用二分法来逐步逼近i,没有必要从1开始顺序遍历 12345678910111213141516171819202122class Solution { public int mySqrt(int x) { if (x == 0 || x == 1) return x; int left = 1; int right = x; while (left < right) { int midPoint = (left + right) / 2; if (midPoint == x / midPoint) { return midPoint; } else if (midPoint > x / midPoint) { right = midPoint; } else if (midPoint < x / midPoint) { left = midPoint + 1; } } return left - 1; }}Runtime: 1 ms","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"2020-01-11-SameTree","slug":"2020-01-11-SameTree","date":"2020-01-11T07:45:18.000Z","updated":"2021-07-27T07:09:41.888Z","comments":true,"path":"2020/01/11/2020-01-11-SameTree/","link":"","permalink":"http://zehai.info/2020/01/11/2020-01-11-SameTree/","excerpt":"","text":"LeetCode 10012345678910111213141516171819202122232425262728293031Given two binary trees, write a function to check if they are the same or not.Two binary trees are considered the same if they are structurally identical and the nodes have the same value.Example 1:Input: 1 1 / \\ / \\ 2 3 2 3 [1,2,3], [1,2,3]Output: trueExample 2:Input: 1 1 / \\ 2 2 [1,2], [1,null,2]Output: falseExample 3:Input: 1 1 / \\ / \\ 2 1 1 2 [1,2,1], [1,1,2]Output: false Solution题目其实很简单的一个递归Recursion,我们很轻松可以通过递归来解决1234567891011class Solution { public 
boolean isSameTree(TreeNode p, TreeNode q) { // p and q are both null if (p == null && q == null) return true; // one of p and q is null if (q == null || p == null) return false; if (p.val != q.val) return false; return isSameTree(p.right, q.right) && isSameTree(p.left, q.left); }}时间复杂度为O(n),控件复杂度为O(logn)~O(n)之间,这道题就不考虑其他解法了,recursion目前看来是最优解","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"2020-01-10-MatrixZero","slug":"2020-01-10-MatrixZero","date":"2020-01-10T15:15:26.000Z","updated":"2021-07-27T07:09:41.887Z","comments":true,"path":"2020/01/10/2020-01-10-MatrixZero/","link":"","permalink":"http://zehai.info/2020/01/10/2020-01-10-MatrixZero/","excerpt":"","text":"LeetCode 731234567891011121314151617181920212223242526272829303132333435Given a m x n matrix, if an element is 0, set its entire row and column to 0. Do it in-place.Example 1:Input: [ [1,1,1], [1,0,1], [1,1,1]]Output: [ [1,0,1], [0,0,0], [1,0,1]]Example 2:Input: [ [0,1,2,0], [3,4,5,2], [1,3,1,5]]Output: [ [0,0,0,0], [0,4,5,0], [0,3,1,0]]Follow up:A straight forward solution using O(mn) space is probably a bad idea.A simple improvement uses O(m + n) space, but still not the best solution.Could you devise a constant space solution? 
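The O(m + n) marking approach from the Follow up can be sketched roughly in JavaScript (record the rows and columns that contain a zero, then blank them in a second pass):

```javascript
// Two passes: first collect every row and column holding a zero,
// then overwrite those rows and columns in place.
function setZeroes(matrix) {
  const rows = new Set(), cols = new Set();
  for (let i = 0; i < matrix.length; i++)
    for (let j = 0; j < matrix[0].length; j++)
      if (matrix[i][j] === 0) { rows.add(i); cols.add(j); }
  for (let i = 0; i < matrix.length; i++)
    for (let j = 0; j < matrix[0].length; j++)
      if (rows.has(i) || cols.has(j)) matrix[i][j] = 0;
  return matrix;
}
```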
Solution一开始以为递归可以解决,可以将矩阵一层层拆开,写下了如下的代码:123456789101112131415161718192021222324252627282930313233343536373839public void setZeroes(int[][] matrix) { int rows = matrix.length-1; int cols = matrix[0].length-1; regression(matrix, rows>=cols?cols:rows);}public void regression(int[][] matrix,int index){ if(index<0){ return; } boolean flag = false; for(int i =index;i<matrix[0].length;i++){ if(matrix[index][i]==0) { handleZero(matrix,i); flag=true; break; } } if(flag==false){ for(int j =index;j<matrix.length;j++){ if(matrix[j][index]==0) { handleZero(matrix,j); break; } } } regression(matrix, --index);}private void handleZero(int[][] matrix,int pos) { for(int i=matrix[0].length-1;i>=pos;i--){ matrix[pos][i]=0; } for(int j=matrix.length-1;j>=pos;j--){ matrix[j][pos]=0; }}写完后很快发现不能够实现,原因就在于他只能管理到内层,外层标为0后,没办法做额外的标记(其实生产代码可以打一些标记),所以只能抛弃这个本以为很简单的方法,该用了set合集去记录要设置0行列的行号或者列号,这个复杂度并不是很复杂,但是执行完发现代码的效率还是很低,先放代码:12345678910111213141516171819202122232425class Solution { public void setZeroes(int[][] matrix) { int R = matrix.length; int C = matrix[0].length; Set<Integer> rows = new HashSet<Integer>(); Set<Integer> cols = new HashSet<Integer>(); for (int i = 0; i < R; i++) { for (int j = 0; j < C; j++) { if (matrix[i][j] == 0) { rows.add(i); cols.add(j); } } } for (int i = 0; i < R; i++) { for (int j = 0; j < C; j++) { if (rows.contains(i) || cols.contains(j)) { matrix[i][j] = 0; } } } }}代码低效的原因在于动用了两层循环,时间复杂度非常低,题目的置0是有规律的,不是无规律的,所以我开始寻求更新简单的方法,先贴最优解,要睡觉了,我的头发啊 123456789101112131415161718192021222324252627282930313233343536373839404142class Solution { public void setZeroes(int[][] matrix) { int R = matrix.length; int C = matrix[0].length; boolean isCol = false; for(int i=0; i<R; i++) { if (matrix[i][0] == 0) { isCol = true; } for(int j=1; j<C; j++) { if(matrix[i][j]==0) { matrix[0][j] = 0; matrix[i][0] = 0; } } } // Iterate over the array once again and using the first row and first column, update the elements. 
for(int i=1; i<R; i++) { for(int j=1; j<C; j++) { if(matrix[i][0]==0 || matrix[0][j]==0) { matrix[i][j] = 0; } } } // See if the first row needs to be set to zero as well if(matrix[0][0]==0) { for(int j=0; j<C; j++) { matrix[0][j] = 0; } } // See if the first column needs to be set to zero as well if(isCol) { for(int i=0; i<R; i++) { matrix[i][0] = 0; } } }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"2020-01-09-RedisTransaction","slug":"2020-01-09-RidisTransaction","date":"2020-01-09T14:42:06.000Z","updated":"2021-07-27T07:09:41.887Z","comments":true,"path":"2020/01/09/2020-01-09-RidisTransaction/","link":"","permalink":"http://zehai.info/2020/01/09/2020-01-09-RidisTransaction/","excerpt":"","text":"官网doc:https://redis.io/topics/transactions 本文纯属阅读笔记,无学术参考价值 what事务(transaction)的本质就是处理好几个动作,要么都成功,要么其中一个失败就全部回滚 每门语言都会有事务的支持,node也有async的方法实现事务几个动作串行,或者并行,一个失败全部回滚,之前写过支付的例子,使用async.waterfall,购买会员后 1.查询支付宝返回支付是否成功 2.获取用户所买会员的等级及相关权限 3.将权益插入用户表中 4.将订单数据记录到订单表中,方便后台查看订单量 大致步骤就是这些 Redis主要使用MULTI ,EXEC,DISCARD WATCH来实现事务的功能 遵循以下原则: 所有命令被序列化后顺序执行,且执行期间不接受其他请求,保证隔离性 EXEC命令触发事务中所有命令的执行,因此,如果客户端调用MULTI命令之前失去连接,则不执行任何操作。如果EXEC命令调用过,则所有的命令都会被执行 howMULTI输入事务以OK答复,此时用户可以发送多个命令,Redis都不会执行,而是排队,一旦调用EXEC,则将会执行所有命令,调用DISCARD将刷新(Flush?清空?重新执行?)事务队列并退出事务 示例代码: 123456789> MULTIOK> INCR fooQUEUED> INCR barQUEUED> EXEC1) (integer) 12) (integer) 1 可以看出EXEC返回一个数组,其中每个元素都是事务中单个命令的答复,其发出顺序与命令相同 当Reids连接处于MULTI的请求时,所有的命令都将以字符串queued答复,当EXEC时,将顺序执行 errors可能存在两种命令错误: 命令可能无法排队,因此在EXEC之前可能有错误(包括命令语法错误) 调用EXEC后,命令执行失败 客户端通过检查已排队(queued)的命令返回值来判断第一种错误,另外从2.6.5开始,服务器将记住在命令排队期间发生的错误,并且拒绝执行事务,返回错误并自动丢弃事务 EXEC执行后错误不会特殊处理,所有的命令都将被执及时有些命令失败 12345678910MULTI+OKSET a abc+QUEUEDLPOP a+QUEUEDEXEC*2+OK-ERR Operation against a key holding the wrong kind of value 即时命令失败,队列里的其他命令也会处理 1234{ name:stu 
time:1}","raw":null,"content":null,"categories":[{"name":"Redis","slug":"Redis","permalink":"http://zehai.info/categories/Redis/"}],"tags":[{"name":"Transaction","slug":"Transaction","permalink":"http://zehai.info/tags/Transaction/"}]},{"title":"2020-01-08-SortColors","slug":"2020-01-08-SortColors","date":"2020-01-08T14:42:06.000Z","updated":"2021-07-27T07:09:41.887Z","comments":true,"path":"2020/01/08/2020-01-08-SortColors/","link":"","permalink":"http://zehai.info/2020/01/08/2020-01-08-SortColors/","excerpt":"","text":"Leetcode-75123456789101112131415Given an array with n objects colored red, white or blue, sort them in-place so that objects of the same color are adjacent, with the colors in the order red, white and blue.Here, we will use the integers 0, 1, and 2 to represent the color red, white, and blue respectively.Note: You are not suppose to use the library's sort function for this problem.Example:Input: [2,0,2,1,1,0]Output: [0,0,1,1,2,2]Follow up:A rather straight forward solution is a two-pass algorithm using counting sort.First, iterate the array counting number of 0's, 1's, and 2's, then overwrite array with total number of 0's, then 1's and followed by 2's.Could you come up with a one-pass algorithm using only constant space? solution题目乍一看非常简单,但确实说使用简单的sort方法以及o(n^2)的排序确实会浪费时间复杂度,本着好奇心,我试了一下,果然成了吊车尾 1234567891011121314class Solution { public void sortColors(int[] nums) { for(int i =0;i<nums.length-1;i++){ for(int j=i+1;j<nums.length;j++){ if(nums[i]>nums[j]){ int tmp=nums[i]; nums[i]=nums[j]; nums[j]=tmp; } } } }}Runtime: 1 ms, faster than 6.35% of Java online submissions for Sort Colors. 
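The two-pass counting sort mentioned in the Follow up is also easy to sketch (a rough JavaScript version; it uses O(1) extra space since there are only three colors):

```javascript
// First pass counts the 0s, 1s and 2s; second pass rewrites the array
// with that many of each color, in order.
function sortColors(nums) {
  const count = [0, 0, 0];
  for (const n of nums) count[n]++;
  let k = 0;
  for (let color = 0; color < 3; color++)
    for (let c = 0; c < count[color]; c++)
      nums[k++] = color;
  return nums;
}
```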
该题优化的核心位置是该数组是一个一维数组,设置两个指针,左边遍历0,遇到0往左放,遇到2往右放,r和l为左右分界线,index记录最后一个0的位置1234567891011121314151617181920212223242526272829class Solution { public void sortColors(int[] nums) { int l = 0; int r = nums.length - 1; int index = 0; while(l <= r) { if(nums[l] == 0) { if(l > index) { int tmp = nums[index]; nums[index] = nums[l]; nums[l] = tmp; index++; } else { l++; index++; } } else if(nums[l] == 2) { int tmp = nums[r]; nums[r] = 2; nums[l] = tmp; r--; } else l++; } }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"2020-01-08-MinimunPathSum","slug":"2020-01-08-MinimumPathSum","date":"2020-01-08T14:42:06.000Z","updated":"2021-07-27T07:09:41.887Z","comments":true,"path":"2020/01/08/2020-01-08-MinimumPathSum/","link":"","permalink":"http://zehai.info/2020/01/08/2020-01-08-MinimumPathSum/","excerpt":"","text":"Leetcode-641234567891011121314Given a m x n grid filled with non-negative numbers, find a path from top left to bottom right which minimizes the sum of all numbers along its path.Note: You can only move either down or right at any point in time.Example:Input:[ [1,3,1], [1,5,1], [4,2,1]]Output: 7Explanation: Because the path 1→3→1→1→1 minimizes the sum. 
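The recurrence is small enough to sketch in JavaScript too (a rough in-place DP, assuming a non-empty grid):

```javascript
// Each cell becomes the minimum path sum needed to reach it, using only
// its top and left neighbours; the answer ends up in the bottom-right cell.
function minPathSum(grid) {
  const m = grid.length, n = grid[0].length;
  for (let i = 0; i < m; i++) {
    for (let j = 0; j < n; j++) {
      if (i === 0 && j === 0) continue;
      const fromTop = i > 0 ? grid[i - 1][j] : Infinity;
      const fromLeft = j > 0 ? grid[i][j - 1] : Infinity;
      grid[i][j] += Math.min(fromTop, fromLeft);
    }
  }
  return grid[m - 1][n - 1];
}
```

Using Infinity for the missing neighbour on the first row and first column avoids writing those edges as separate loops.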
solution解法为简单的动态规划,只要找到比较该元素,上方和左方的值的最小值,然后与该值相加,就可以得到解 123456789101112class Solution { public int minPathSum(int[][] grid) { for(int i=1; i<grid.length; i++) grid[i][0] += grid[i-1][0]; for(int j=1; j<grid[0].length; j++) grid[0][j] += grid[0][j-1]; for (int i=1; i<grid.length; i++) { for (int j=1; j<grid[0].length; j++) { grid[i][j] = Math.min(grid[i][j-1], grid[i-1][j]) + grid[i][j]; } } return grid[grid.length-1][grid[0].length-1]; }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"2020-01-07-关于Promise的思考","slug":"2020-01-07-关于Promise的思考","date":"2020-01-07T14:42:06.000Z","updated":"2021-07-27T07:09:41.887Z","comments":true,"path":"2020/01/07/2020-01-07-关于Promise的思考/","link":"","permalink":"http://zehai.info/2020/01/07/2020-01-07-%E5%85%B3%E4%BA%8EPromise%E7%9A%84%E6%80%9D%E8%80%83/","excerpt":"","text":"题目(这道题在互联网上已经有了) 123可以添加任务,任务包含任务数据,任务延迟触发的等待时间。在任务到达触发时间点时,自动触发执行此任务。队列中任务保持先进先出原则:假设 A 任务的触发等待时间为 X,B 任务的触发等待时间为 Y,B 在 A 之后被添加入队列,则 A 的前驱任务执行完成后等待时间 X 后,才执行 A,同理在 A 执行完成后,等待时间 Y,才执行 B。 思路过程1.Java上线读题目就是延时队列的特征,Java有锁,有多线程,写起来多方便 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081import java.util.concurrent.BlockingQueue;import java.util.concurrent.DelayQueue;import java.util.concurrent.Delayed;import java.util.concurrent.TimeUnit;public class HandWritingQueue { public static void main(String[] args) { final BlockingQueue<DelayedElement> deque = new DelayQueue<>(); Runnable producerRunnable = new Runnable() { int i = 10; public void run() { while (true && i>0) { try { --i; System.out.println("producing "+i+",wait "+i+" seconds"); deque.put(new DelayedElement(1000 * i, "i=" + i)); Thread.sleep(200); } catch (InterruptedException e) { e.printStackTrace(); } } } }; Runnable 
customerRunnable = new Runnable() { public void run() { while (true) { try { System.out.println("consuming:" + deque.take().msg); //Thread.sleep(500); } catch (InterruptedException e) { e.printStackTrace(); } } } }; Runnable getSize= new Runnable() { @Override public void run() { while (true) { System.out.println("size="+deque.size()); try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } } } }; Thread thread1 = new Thread(producerRunnable); thread1.start(); Thread thread2 = new Thread(customerRunnable); thread2.start(); Thread thread3 = new Thread(getSize); thread3.start(); } static class DelayedElement implements Delayed { private final long expire; private final String msg; public DelayedElement(long delay, String msg) { this.msg = msg; expire = System.currentTimeMillis() + delay; } @Override public long getDelay(TimeUnit unit) { return unit.convert(this.expire - System.currentTimeMillis(), TimeUnit.MILLISECONDS); } @Override public int compareTo(Delayed o) { return -1;//FIFO } }} 2.Node上线被提醒该题目可以用node实现,且不需要借助redis来做,然后我上手就是一把操作: 123456789101112131415161718192021222324'use strict'class DelayElement { constructor(data, expire) { this.data = data; this.expire = expire;//second }}const delayArray = [];//push two element in delayArraydelayArray.push(new DelayElement(1, 2));delayArray.push(new DelayElement(2, 1));let length = delayArray.length;let time_cnt = 0;while (delayArray.length > 0) { let de = delayArray.shift(); time_cnt += de.expire;//serial (function () { setTimeout(() => { console.log('expire data is :' + de.data + ',expire time is :' + de.expire); }, time_cnt * 1000); })();} 我以为设计的考点也就是立即执行函数,延时的使用,但是这里的for循环是个伪串行,实际上是并发的,也为第三步的修改提供了bug 3.Promise时代一开始我是想把async函数放进去,写了如下的代码: 1234567891011121314151617'use strict'const delayArray = [];const daPush = (data, expire) => { delayArray.push(async () => { setTimeout(() => { console.log('data is ' + data + ' and expire is ' + expire); }, expire * 1000); });}daPush(1, 4);//2 
secondsdaPush(2, 5);(async () => { for (const da of delayArray) { await da(); }})(); 发现代码还是并发的,并没有变成串行,然后查了一下可能的问题(以下为个人猜测,欢迎指正)async声明的函数会包装成Promise不假,但函数体里的setTimeout并没有被await,async函数调度完定时器就立刻resolve了,所以await da()并不会等到回调执行 4.正解 Promise的executor(传入构造函数的函数)是同步执行的,会阻塞主线程 Macrotasks和Microtasks 都属于上述的异步任务中的一种,他们分别有如下API:macrotasks: setTimeout, setInterval, setImmediate, I/O, UI renderingmicrotasks: process.nextTick, Promise, MutationObserver 任务队列中,在每一次事件循环中,macrotask只会提取一个执行,而microtask会一直提取,直到microtask队列为空为止。 也就是说如果某个microtask任务被推入到执行中,那么当主线程任务执行完成后,会循环调用该队列任务中的下一个任务来执行,直到该任务队列到最后一个任务为止。 而事件循环每次只会入栈一个macrotask,主线程执行完成该任务后又会检查microtasks队列并完成里面的所有任务后再执行macrotask的任务。 另外,setImmediate对应的是check队列,setTimeout/setInterval对应的是timers队列 123456789101112131415161718192021222324252627282930'use strict'const delayArray = [];const daPush = (data, expire) => { delayArray.push(() => new Promise((resolve,reject) => { setTimeout(() => { if(data) { console.log('data is ' + data + ' and expire is ' + expire); resolve(true); } else{ reject('there is nodata'); } }, expire * 1000); }));};daPush(1, 4);//2 secondsdaPush(2, 5);(async () => { for (const da of delayArray) { da().then((value)=>{ // console.log(value); }).catch((value)=>{ console.log(value); }); //没有28-33,只35行也可以 // await da(); }})();","raw":null,"content":null,"categories":[],"tags":[]},{"title":"2020-01-07-SetTimeout","slug":"2020-01-07-SetTimeout","date":"2020-01-07T05:01:52.000Z","updated":"2021-07-27T07:09:41.886Z","comments":true,"path":"2020/01/07/2020-01-07-SetTimeout/","link":"","permalink":"http://zehai.info/2020/01/07/2020-01-07-SetTimeout/","excerpt":"","text":"执行了一下程序: 12345while(true){ setTimeout(()=>{ console.log(1) },0)} 返回了一下内容: 123456789101112131415161718192021222324252627<--- Last few GCs --->[12308:000001E565C2F6F0] 14167 ms: Mark-sweep 1395.9 (1425.2) -> 1395.9 (1423.7) MB, 1754.1 / 0.0 ms (+ 0.0 ms in 39 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 1764 ms) (average mu = 0.105, current mu = 0.020) a[12308:000001E565C2F6F0] 14175 ms: Scavenge 1397.3 (1423.7) -> 1397.3 (1425.2) MB, 
7.0 / 0.0 ms (average mu = 0.105, current mu = 0.020) allocation failure<--- JS stacktrace --->==== JS stack trace ========================================= 0: ExitFrame [pc: 000002AFCABDC5C1]Security context: 0x037b5391e6e9 <JSObject> 1: /* anonymous */ [0000016D4360B9A1] [D:\\working\\h3yun\\test.3.js:~1] [pc=000002AFCAC7210F](this=0x016d4360bad1 <Object map = 000001F79EE82571>,exports=0x016d4360bad1 <Object map = 000001F79EE82571>,require=0x016d4360ba91 <JSFunction require (sfi = 00000397F3EC6A31)>,module=0x016d4360ba09 <Module map = 000001F79EED3DA1>,__filename=0x0397f3ece219 <Strin...FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory 1: 00007FF7C7BFC6AA v8::internal::GCIdleTimeHandler::GCIdleTimeHandler+4506 2: 00007FF7C7BD7416 node::MakeCallback+4534 3: 00007FF7C7BD7D90 node_module_register+2032 4: 00007FF7C7EF189E v8::internal::FatalProcessOutOfMemory+846 5: 00007FF7C7EF17CF v8::internal::FatalProcessOutOfMemory+639 6: 00007FF7C80D7F94 v8::internal::Heap::MaxHeapGrowingFactor+9620 7: 00007FF7C80CEF76 v8::internal::ScavengeJob::operator=+24550 8: 00007FF7C80CD5CC v8::internal::ScavengeJob::operator=+17980 9: 00007FF7C80D6317 v8::internal::Heap::MaxHeapGrowingFactor+232710: 00007FF7C80D6396 v8::internal::Heap::MaxHeapGrowingFactor+245411: 00007FF7C8200637 v8::internal::Factory::NewFillerObject+5512: 00007FF7C827D826 v8::internal::operator<<+7349413: 000002AFCABDC5C1 why因为业务代码阻塞住,没有进入timer_handler的循环,所以1虽然进入了timer的红黑树中,但是不可能输出,不像之前for循环会有一个截止条件,后续的定时器还是可以生效的 另外有一个地方记混了,遍历回调的时候,会执行直到回调为空或者最大执行回调数量,而业务代码只会在这里阻塞不会停止,这也是为何出现GC的日志 
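作为对照补充一段示意代码(笔者所加,假设在node下运行):把死循环换成有限次数的循环,同步代码执行完毕后控制权交还事件循环,定时器回调就能正常触发:

```javascript
// 有限循环:同步代码会结束,事件循环随后才有机会处理timer队列
let fired = 0;
for (let i = 0; i < 3; i++) {
  setTimeout(() => {
    fired++;
  }, 0);
}
// 同步代码到此执行完毕,三个0ms回调依次执行
setTimeout(() => {
  console.log('fired = ' + fired); // 输出 fired = 3
}, 10);
```

与上面while(true)的死循环不同,这里注册的回调都能得到执行,进程也不会把堆积的定时器撑到内存溢出。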
whatsetTimeout是JS中常用的定时器函数,用来延时执行一个回调函数,当执行业务代码的时候我们会将setTimeout,setImmediate,nextTick,setInterval插入timer_handler的不同队列中(详见左侧node分支,且文章也在更新中),当JS单线程执行完业务代码后,才开始eventloop查找观察者来进行回调,当然也存在延时不精确的可能","raw":null,"content":null,"categories":[],"tags":[]},{"title":"2020-01-06-gRPC","slug":"2020-01-06-gRPC","date":"2020-01-06T12:55:51.000Z","updated":"2021-07-27T07:09:41.886Z","comments":true,"path":"2020/01/06/2020-01-06-gRPC/","link":"","permalink":"http://zehai.info/2020/01/06/2020-01-06-gRPC/","excerpt":"","text":"whygRPC是任何环境都可以运行的高性能开源框架,它可以通过pluggable support来高效实现负载均衡,心跳检测和授权,它也可以应用于分布式计算的最后一个流程(连接各个端到后端) 简单的服务定义 快速启动易扩展 跨语言,跨平台 双向流和鉴权 feature gRPC可以通过protobuf来定义接口,从而可以有更加严格的接口约束条件。关于protobuf可以参见笔者之前的小文Google Protobuf简明教程 另外,通过protobuf可以将数据序列化为二进制编码,这会大幅减少需要传输的数据量,从而大幅提高性能。 gRPC可以方便地支持流式通信(理论上通过http2.0就可以使用streaming模式, 但是通常web服务的restful api似乎很少这么用,通常的流式数据应用如视频流,一般都会使用专门的协议如HLS,RTMP等,这些就不是我们通常的web服务了,而是有专门的服务器应用。) node123456$ # Clone the repository to get the example code$ git clone -b v1.25.0 https://github.com/grpc/grpc$ # Navigate to the dynamic codegen "hello, world" Node example:$ cd grpc/examples/node/dynamic_codegen$ # Install the example's dependencies$ npm install","raw":null,"content":null,"categories":[{"name":"gRPC","slug":"gRPC","permalink":"http://zehai.info/categories/gRPC/"}],"tags":[{"name":"network","slug":"network","permalink":"http://zehai.info/tags/network/"}]},{"title":"2020-01-03-SearchInsertPosition","slug":"2020-01-03-SearchInsertPosition","date":"2020-01-03T09:01:03.000Z","updated":"2021-07-27T07:09:41.886Z","comments":true,"path":"2020/01/03/2020-01-03-SearchInsertPosition/","link":"","permalink":"http://zehai.info/2020/01/03/2020-01-03-SearchInsertPosition/","excerpt":"","text":"LeetCode35Easy Given a sorted array and a target value, return the index if the target is found. If not, return the index where it would be if it were inserted in order. You may assume no duplicates in the array. 
Example 1: 12Input: [1,3,5,6], 5Output: 2 Example 2: 12Input: [1,3,5,6], 2Output: 1 Example 3: 12Input: [1,3,5,6], 7Output: 4 Example 4: 12Input: [1,3,5,6], 0Output: 0 离职后的第一题想先简单点热个身(后面有个难的目前还没做出来),就是说给一个target,返回它在数组中的位置 How该题目一上脑子就可以写下如下的代码 12345678910111213141516public int searchInsert(int[] nums, int target) { if (nums == null || nums.length == 0) { return 0; } if (target > nums[nums.length - 1]) { return nums.length; } int pos =1; for(int i =0;i<nums.length-1;i++){ if(nums[i]<target && nums[i+1]>=target){ pos = ++i; break; } } return pos;} 但转念一想,题目中给定的是一个sorted array这是一个优化的切口,可以将O(n)的复杂度降低到O(logn),通过递归来拆解完成这道题 12345678910111213141516171819private int searchInsert(int[] nums, int target, int low, int high) { int mid = (low+high)/2; if (target < nums[mid]) { if (mid == 0 || target > nums[mid-1]) { return mid; } return searchInsert(nums, target, low, mid-1); } if (target > nums[mid]) { if (mid == nums.length-1 || target < nums[mid+1]) { return mid+1; } return searchInsert(nums, target, mid+1, high); } return mid; }","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"easy","slug":"easy","permalink":"http://zehai.info/tags/easy/"}]},{"title":"SpringBoot概要","slug":"2019-12-22-SpringBoot概要","date":"2019-12-16T12:37:30.000Z","updated":"2021-08-16T07:58:26.025Z","comments":true,"path":"2019/12/16/2019-12-22-SpringBoot概要/","link":"","permalink":"http://zehai.info/2019/12/16/2019-12-22-SpringBoot%E6%A6%82%E8%A6%81/","excerpt":"","text":"含义:spring 的简化配置版本(继承父类依赖,拥有父类的所有配置) 123456789101112131415<!--你的项目pom文件--><parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.0.4.RELEASE</version> <relativePath/> <!-- lookup parent from repository --></parent><!--点开spring-boot-starter-parent,文件相对位置\\org\\springframework\\boot\\spring-boot-starter-parent\\2.0.4.RELEASE--><parent> <groupId>org.springframework.boot</groupId> 
<artifactId>spring-boot-dependencies</artifactId> <version>2.0.4.RELEASE</version> <relativePath>../../spring-boot-dependencies</relativePath></parent> 微服务 AOP 简化部署,可以在pom.xml中配置plugins来实现导出jar包,方便执行 Features: starter 入口类标记@SpringBootApplication SpringBoot配置类@SpringBootConfiguration 配置类@Configuration 开启自动配置@EnableAutoConfiguration 自动配置包@AutoConfigurationPackage 导入组件@Import 疑惑 为什么使用注解 为什么需要AOP 为什么选择springboot","raw":null,"content":null,"categories":[{"name":"SpringBoot","slug":"SpringBoot","permalink":"http://zehai.info/categories/SpringBoot/"}],"tags":[{"name":"introduction","slug":"introduction","permalink":"http://zehai.info/tags/introduction/"}]},{"title":"2019-12-22-zookeeper概要","slug":"2019-12-22-zookeeper概要","date":"2019-12-16T12:37:30.000Z","updated":"2021-07-27T07:09:41.885Z","comments":true,"path":"2019/12/16/2019-12-22-zookeeper概要/","link":"","permalink":"http://zehai.info/2019/12/16/2019-12-22-zookeeper%E6%A6%82%E8%A6%81/","excerpt":"","text":"含义:动物管理员,管理节点 作用:开源的分布式应用程序协调服务(简单来说,就是一个抽象出来,专门管理各个服务的管理员,发现服务,注册服务,以实现分布式应用的联合工作) feature 树状目录结构,节点称作znode 持久节点(客户端断开仍然存在) 临时节点(断开消失) 节点监听(通过get exists,getchildren来实行监听) 应用: 分布式锁 描述 问题场景 我们有一个服务C,将A系统的订单数据,发送到B系统进行财务处理,但这个服务C部署了三个服务器来进行并发,其中有些数据在传送处理时会new一个objectid,如果不添加锁,该数据可能被两个服务同时调起,在B服务中生成两条记录 解决方案 我们同步数据时候,需要给同一个数据加锁,防止该数据同时被两个服务调起,服务访问某条订单数据时候,需要先获得锁,操作完后释放锁 实现方式 每个服务连接一个znode的下属有序临时节点,并监听上个节点的变化,编号最小的临时节点获得锁,操作资源,来实现 服务注册和发现 问题场景 我们同步数据的服务C(上个表格中描述),可能是部署在一个机器上的多进程,也可能是部署在多个物理ip上的服务,他是动态变化的,如果没有zookeeper类的软件,可能我每改一次ip,都需要重启一下服务,服务宕机了,也要改ip(不然404) 解决方案 我们需要有个服务来管理应用状态,知道服务的运行状态,这样,当其他服务调起这个服务的时候,才能通过zookeeper提供的地址进行通信 实现方式 服务启动会注册到zookeeper,并保持心跳,其他服务想要调用某服务的时候,询问zookeeper拿到地址,然后发送请求报文(例如RPC) 
1.每个应用创建一个持久节点,每个服务在持久节点下建立临时节点,不同应用间会有监听,A服务如果变动,B服务会收到通知","raw":null,"content":null,"categories":[{"name":"zookeeper","slug":"zookeeper","permalink":"http://zehai.info/categories/zookeeper/"}],"tags":[{"name":"introduction","slug":"introduction","permalink":"http://zehai.info/tags/introduction/"}]},{"title":"2019-12-15-@SpringBootApplication","slug":"2019-12-15-SpringBootApplication","date":"2019-12-15T15:13:51.000Z","updated":"2021-07-27T07:09:41.885Z","comments":true,"path":"2019/12/15/2019-12-15-SpringBootApplication/","link":"","permalink":"http://zehai.info/2019/12/15/2019-12-15-SpringBootApplication/","excerpt":"","text":"启动类我们可以见到最简单的springboot的application.java文件如下123456@SpringBootApplicationpublic class SpringTestApplication { public static void main(String[] args) { SpringApplication.run(SpringTestApplication.class, args); } 实际上,调用SpringApplication的run方法时,首先会创建一个SpringApplication类的对象,利用构造方法创建SpringApplication对象时会调用initialize方法 1234567891011public static ConfigurableApplicationContext run(Object source, String... args) { return run(new Object[] { source }, args); } public static ConfigurableApplicationContext run(Object[] sources, String[] args) { return new SpringApplication(sources).run(args); } public SpringApplication(Object... 
sources) { initialize(sources); } 其中initialize方法如下 1234567891011121314151617`private void initialize(Object[] sources) { // 在sources不为空时,保存配置类 if (sources != null && sources.length > 0) { this.sources.addAll(Arrays.asList(sources)); } // 判断是否为web应用 this.webEnvironment = deduceWebEnvironment(); // 获取并保存容器初始化类,通常在web应用容器初始化使用 // 利用loadFactoryNames方法从路径MEAT-INF/spring.factories中找到所有的ApplicationContextInitializer setInitializers((Collection) getSpringFactoriesInstances( ApplicationContextInitializer.class)); // 获取并保存监听器 // 利用loadFactoryNames方法从路径MEAT-INF/spring.factories中找到所有的ApplicationListener setListeners((Collection) getSpringFactoriesInstances(ApplicationListener.class)); // 从堆栈信息获取包含main方法的主配置类 this.mainApplicationClass = deduceMainApplicationClass();} 实例化后调用run: 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051public ConfigurableApplicationContext run(String... args) { StopWatch stopWatch = new StopWatch(); stopWatch.start(); ConfigurableApplicationContext context = null; FailureAnalyzers analyzers = null; // 配置属性 configureHeadlessProperty(); // 获取监听器 // 利用loadFactoryNames方法从路径MEAT-INF/spring.factories中找到所有的SpringApplicationRunListener SpringApplicationRunListeners listeners = getRunListeners(args); // 启动监听 // 调用每个SpringApplicationRunListener的starting方法 listeners.starting(); try { // 将参数封装到ApplicationArguments对象中 ApplicationArguments applicationArguments = new DefaultApplicationArguments( args); // 准备环境 // 触发监听事件——调用每个SpringApplicationRunListener的environmentPrepared方法 ConfigurableEnvironment environment = prepareEnvironment(listeners, applicationArguments); // 从环境中取出Banner并打印 Banner printedBanner = printBanner(environment); // 依据是否为web环境创建web容器或者普通的IOC容器 context = createApplicationContext(); analyzers = new FailureAnalyzers(context); // 准备上下文 // 1.将environment保存到容器中 // 2.触发监听事件——调用每个SpringApplicationRunListeners的contextPrepared方法 // 
3.调用ConfigurableListableBeanFactory的registerSingleton方法向容器中注入applicationArguments与printedBanner // 4.触发监听事件——调用每个SpringApplicationRunListeners的contextLoaded方法 prepareContext(context, environment, listeners, applicationArguments, printedBanner); // 刷新容器,完成组件的扫描,创建,加载等 refreshContext(context); afterRefresh(context, applicationArguments); // 触发监听事件——调用每个SpringApplicationRunListener的finished方法 listeners.finished(context, null); stopWatch.stop(); if (this.logStartupInfo) { new StartupInfoLogger(this.mainApplicationClass) .logStarted(getApplicationLog(), stopWatch); } // 返回容器 return context; } catch (Throwable ex) { handleRunFailure(context, listeners, analyzers, ex); throw new IllegalStateException(ex); }} 为了建立调用逻辑画了一张图,比较粗糙 总结SpringApplication.run一共做了两件事 创建SpringApplication对象;在对象初始化时保存事件监听器,容器初始化类以及判断是否为web应用,保存包含main方法的主配置类。 调用run方法;准备spring的上下文,完成容器的初始化,创建,加载等。会在不同的时机触发监听器的不同事件 https://www.cnblogs.com/davidwang456/p/5846513.html","raw":null,"content":null,"categories":[{"name":"SpringBoot","slug":"SpringBoot","permalink":"http://zehai.info/categories/SpringBoot/"}],"tags":[{"name":"annotation","slug":"annotation","permalink":"http://zehai.info/tags/annotation/"}]},{"title":"2019-12-14-分布式系统","slug":"2019-12-14-分布式系统","date":"2019-12-14T07:41:03.000Z","updated":"2021-07-27T07:09:41.885Z","comments":true,"path":"2019/12/14/2019-12-14-分布式系统/","link":"","permalink":"http://zehai.info/2019/12/14/2019-12-14-%E5%88%86%E5%B8%83%E5%BC%8F%E7%B3%BB%E7%BB%9F/","excerpt":"","text":"[TOC] 分布式锁原因:目的: 数据库唯一索引redis 
的SETNXredis的RedLock分布式事务CAPBASEPaxosRaft","raw":null,"content":null,"categories":[{"name":"Distribution","slug":"Distribution","permalink":"http://zehai.info/categories/Distribution/"}],"tags":[{"name":"KnowageTree","slug":"KnowageTree","permalink":"http://zehai.info/tags/KnowageTree/"}]},{"title":"2019-10-27-DynamicProgramming动态规划","slug":"2019-10-27-DynamicProgramming动态规划","date":"2019-10-27T04:01:05.000Z","updated":"2021-07-27T07:09:41.884Z","comments":true,"path":"2019/10/27/2019-10-27-DynamicProgramming动态规划/","link":"","permalink":"http://zehai.info/2019/10/27/2019-10-27-DynamicProgramming%E5%8A%A8%E6%80%81%E8%A7%84%E5%88%92/","excerpt":"","text":"what动态规划是通过组合子问题的解里求解原问题,一般被用来求最优化问题 1.刻画一个最优解的结构特征 2.递归定义最优解的值 3.计算最优解 4.计算的信息构造最优解","raw":null,"content":null,"categories":[{"name":"Algorithms","slug":"Algorithms","permalink":"http://zehai.info/categories/Algorithms/"}],"tags":[{"name":"DynamicProgramming","slug":"DynamicProgramming","permalink":"http://zehai.info/tags/DynamicProgramming/"}]},{"title":"2019-09-21-中台是什么","slug":"2019-09-21-中台是什么","date":"2019-09-21T01:54:06.000Z","updated":"2021-07-27T07:09:41.884Z","comments":true,"path":"2019/09/21/2019-09-21-中台是什么/","link":"","permalink":"http://zehai.info/2019/09/21/2019-09-21-%E4%B8%AD%E5%8F%B0%E6%98%AF%E4%BB%80%E4%B9%88/","excerpt":"","text":"why公司最近上了一套中台服务,因为好奇所以查了一下资料,中台是为了提高开发效率,将各个服务中共同的组织,资源集中管理,作为一个整体服务,宏观上我们可以把淘宝客户端,盒马生鲜,饿了么看做大前端,而他们有一部分共享数据,比如用户信息,支付功能,搜索功能等 又比如我们公司的电商平台,核心系统包括,ERP(企业资源计划即 ERP Enterprise Resource Planning),WMS(仓库管理系统Warehouse Management System)以及一套交付系统(包含购买,安装服务,维修服务,代理商管理等),他们需要共享商品信息,ERP需要用来算账,WMS需要用来发货,交付系统需要用来记录他的生命周期,就在中台配置一套信息,就可以达到三套系统都可以访问的效果。 what中台也可以分类: 业务中台(如上举例我们公司的业务) 技术中台(如淘宝的中台,当然也有偏业务的部分,主要目的防止重复造轮子) 数据中台(包括建模,日志分析,profile) 算法中台(推荐算法,搜索算法等) feature目前中台还是比较烧钱的吧,公司没有到达一定的规模,这个东西还是没有什么卵用,我们目前上了一套ERP,一套中台,级别在千万吧,还需要各个部门进行配合,进行系统整合(以前都是各干各的,系统间几乎没有交互,重复造轮子)。恶心的我啊,加了10117了三个月才大体上能用了 
不过我觉得中台的发展历史可能和服务一样,一个整体的服务臃肿,后续的中台还是会变成中心化,即一个核心业务,其他做成微服务,分布式的架构,是目前技术潮流的前进方向","raw":null,"content":null,"categories":[{"name":"Java","slug":"Java","permalink":"http://zehai.info/categories/Java/"}],"tags":[]},{"title":"2019-09-14Node日志感受","slug":"2019-09-14-Node日志感受","date":"2019-09-14T12:33:37.000Z","updated":"2021-07-27T07:09:41.884Z","comments":true,"path":"2019/09/14/2019-09-14-Node日志感受/","link":"","permalink":"http://zehai.info/2019/09/14/2019-09-14-Node%E6%97%A5%E5%BF%97%E6%84%9F%E5%8F%97/","excerpt":"","text":"why日志是用来记录程序运行重要的工具 记录请求日志,关键节点打上日志,可以追踪问题(生产) 方便调试,定位故障 监控应用的运行状态 what(egg.js为例)日志分为: appLogger应用日志,也是我们自定义的日志 coreLogger核心框架,插件日志 errorLogger agentLogger用于监控agent日志 日志级别: ctx.logger.debug() ctx.logger.info() ctx.logger.warn() ctx.logger.error() 以appLogger为例,一共4*4种 日志编码: 默认utf-8 feature目前日志都支持切割,每天一个文件,以.log.2019-09-14为尾缀(小时切割和文件大小切割实用性不高),编写日志的时候我们也需要注意如下几点: 在关键请求关键位置打好日志 打印日志注明这是哪个文件哪个方法处理的日志 logger.debug(`>>>> Entering yourMethod(month = ${month}, count= ${count}\"); //通过日志 >>>> 和 <<<< 将给出函数输入和退出的信息 日志不能太多,一个是查问题日志太多,第二个是对硬盘写入日志也有一定性能影响(egg是写入内存,每秒保存一次硬盘) 合理使用try-catch来进行日志输出 日志写法一定要避免简洁,不要日志再抛错(正常打印参数,打印处理结果) 日志不能具备除了日志以外的功能 正确把握日志级别,info记录信息(最主要的),debug显示调试信息,warn显示警告,error保存数据库请求类型的报错 尽量使用ctx.logger而并非console.log,后者将会把所有日志打印在stdout中,无法关闭或打开调试信息,并且不区分级别","raw":null,"content":null,"categories":[{"name":"Node","slug":"Node","permalink":"http://zehai.info/categories/Node/"}],"tags":[{"name":"logs","slug":"logs","permalink":"http://zehai.info/tags/logs/"}]},{"title":"绍兴游记","slug":"2019-05-05-绍兴游记","date":"2019-05-06T06:09:05.000Z","updated":"2021-07-27T07:09:41.884Z","comments":true,"path":"2019/05/06/2019-05-05-绍兴游记/","link":"","permalink":"http://zehai.info/2019/05/06/2019-05-05-%E7%BB%8D%E5%85%B4%E6%B8%B8%E8%AE%B0/","excerpt":"","text":"5月1日搭车去了绍兴,一个是自己毕业后其实既没有毕业旅行,也没有去哪里玩儿,所以想补偿自己一下,第二个是我表姐给我买了票了,想着还是去吧。 
因为最后一个工作日加了个班,然后又起得很早,读着东野圭吾的《嫌疑人X的献身》,中午饿了就在高铁上买了15元的盒饭,拿着kindle强行盖了会儿,,锁屏突然推送了广告,这几个字,读了好几遍,好是喜欢。耳机刚好播放到最近很是喜欢的《你的酒馆对我打了样》,我调整了椅子,时速307km逃离着这座有你的城市。 杭州高楼鳞次栉比,穿过一栋栋高楼就来到了绍兴,这个城市不是很繁华,倒也是一个保留的很好的江南古镇,我很喜欢这里,火车站,背着包,司机师傅操着一口流畅的普通话礼貌的问着我去哪儿,一边介绍着风景名胜,满满的都是对这个城市的热爱,有风景名胜兰亭,壮阔的东湖,一个慢节奏的小城市,除却了对金钱的渴望,连揽客都变得那么悠闲。 吃过饭,和侄子一起去了仓桥直街,其实可以理解为低配版的南锣鼓巷,人不算特别多吧,但是风景却很好的保留了江南的风味,一轮明月(非p30pro),以及灯光的烘托,让江南的夜晚,似乎比白天更加的夺目。陈旧的街巷保留了最初的最原始的石板街,街边的店家还是很古旧的撑着旗帜,还是过去那种一个很大的门(2m高*7个木板)还有很多过去的宣传标语,当然也有很多小吃。 第二天的行程主要就是鲁迅故居了,其实并没有什么让我眼前一亮的地方,因为这里的人实在太多了,我早上九点半抵达景点,到11点才排队进入了鲁迅祖居,倒也很是沮丧,而且祖居里其实并没有什么值得参考的,游人们看长安花一样,参观者一个一个的房间。然后我又排队了40分钟进入了百草园,想一看鲁迅童年最快乐的地方,但却也什么也没有看到,一个不是很好看的花园,料理的和我爷爷的菜地一样,不过或许树人童年就是在这么一块地方进行玩耍的,很多游人围着百草园的一块大石头上合影留恋,排着队,各种姿势摆拍,令我觉得很是不舒服(我也没有拍到)。 倒也怎么看,鲁迅的童年应该也很是无聊,强行找着自己的乐子吧,后来排了30分钟的队伍去了三味书屋,其实我当时的心情是抗拒的,但还是忍着烈日,走上了不归的队伍,书屋的景点其实很小,一个小的教室,两边是家长的坐席,中间是学生的座位,图片可以看到鲁迅其实是坐在讲台左边的,看来他小时候也是个先生特别关照的对象鸭。 其实每逛完一个景点都是非常长的商业街,路的两边充斥着特产豆腐,黄酒产品,虽然我不是很反感这种景点恰饭情景,但是满街飘着臭豆腐的味道,回荡在鲁迅故居的上空,但多少也是有点违和,第二天的行程是安昌古镇,其实也没有什么特别的。 历史的前轮碾压而过,很多东西都因为商业化而丢失了曾经的自己,不过总体来讲我还是比较喜欢吃过午饭,在江南的水边走着,遇到一位94岁的奶奶晒太阳,打了个招呼,他居住在这里三十年了,每次节假日,这里都会来很多人,之前就在这河里洗衣服,打水,后来腿脚不方便了,就搬把凳子坐在这里,听着繁华的声音,晒着太阳,看着船夫送走一个又一个人,这里的瓦年龄都很大,之前屋子的瓦还坏过一个,他折腾了好久才暂时不滴水了,她涛涛不觉的讲着,沉醉在这个小镇带给他的快乐和烦恼中 两天的行程不是很满,不是很累,也不是很轻松(到哪儿,哪儿都排队),回去没有抢到票,从绍兴一直站着回了北京,小说确实也没有读的下去,我看着窗外的风景,思念着一个人,认识这么久,我还没和你一起旅游过","raw":null,"content":null,"categories":[],"tags":[]},{"title":"2019-04-24-Nodejs12","slug":"2019-04-24-Nodejs12","date":"2019-04-24T15:13:10.000Z","updated":"2021-07-27T07:09:41.883Z","comments":true,"path":"2019/04/24/2019-04-24-Nodejs12/","link":"","permalink":"http://zehai.info/2019/04/24/2019-04-24-Nodejs12/","excerpt":"","text":"Introducing Node.js 12raw article Apr 24 This blog was written by Bethany Griggs and Michael Dawson, with additional contributions from the Node.js Release Team and Technical Steering committee. We are excited to announce Node.js 12 today. 
Highlighted updates and features include faster startup and better default heap limits, updates to V8, TLS, llhttp, new features including diagnostic report, bundled heap dump capability and updates to Worker Threads, N-API and ES6 module support and more. The Node.js 12 release replaces version 11 in our current release line. The Node.js release line will become a Node.js Long Term Support (LTS) release in Oct 2019 (more details on LTS strategy here). V8 Gets an Upgrade: V8 update to V8 7.4As always a new version of the V8 JavaScript engine brings performance tweaks and improvements as well as keeping Node.js up with the ongoing improvements in the language and runtime. Highlights include: Async stack traces: https://v8.dev/blog/v8-release-72#async-stack-traces Faster calls with arguments mismatch: https://v8.dev/blog/v8-release-74#faster-calls-with-arguments-mismatch Faster await: https://v8.dev/blog/v8-release-73#faster-await Faster javascript parsing: https://v8.dev/blog/v8-release-72#javascript-parsing Read more about V8 at their official blog. Hello TLS 1.3 Node.js 12 is introducing TLS1.3 support and making it the default max protocol, while also supporting CLI/NODE_OPTIONS switches to disable it if necessary. TLS1.3 is a major update to the TLS protocol, with many security enhancements and should be used over TLS1.2 whenever possible. TLS1.3 is different enough that even though the OpenSSL APIs are technically API/ABI compatible when TLS1.3 is negotiated, changes in the timing of protocol records and of callbacks broke assumptions hard-coded into the ‘tls’ module. This change introduces no API incompatibilities when TLS1.2 is negotiated. It is the intention that it be backported to current and LTS release lines with the default maximum TLS protocol reset to ‘TLSv1.2’. This will allow users of those lines to explicitly enable TLS1.3 if they want. 
If you want to read more you can check out these related articles:https://developer.ibm.com/blogs/openssl-111-has-landed-in-nodejs-master-and-why-its-important-for-nodejs-lts-releases/, https://developer.ibm.com/blogs/tls13-is-coming-to-nodejs/ Properly configure default heap limitsThis update will configure the JavaScript heap size based on available memory instead of using defaults that were set by V8 for use with browsers. In previous releases, unless configured, V8 defaulted to limiting the max heap size to 700 MB or 1400MB on 32 and 64-bit platforms respectively. Configuring the heap size based on available memory ensures that Node.js does not try to use more memory than is available and terminating when its memory is exhausted. This is particularly useful when processing large data-sets. As before, it will still be possible to set — max-old-space-size to use a different limit if the default is not appropriate for your application. Switch default http parser to llhttpNode.js 12 will also switch the default parser to llhttp. This will be beneficial in that it will make testing and comparing the new llhttp-based implementation easier. First introduced as llhttp experimental in v11.2.0, llhttp will be taken out of experimental in this release. Making Native Modules Easier — progress continuesNode.js 12 continues the trend of making building and supporting native modules easier. Changes include better support for native modules in combination with Worker threads, as well as N-API (https://nodejs.org/api/n-api.html#n_api_n_api) version 4 (which has also been backported to 8.x and 10.x) which makes it easier to use your own threads for native asynchronous functions. 
You can read more about this and how you can leverage it in your modules in this great article here: https://medium.com/the-node-js-collection/new-features-bring-native-add-ons-close-to-being-on-par-with-js-modules-cd4f9b8e4b4 Worker ThreadsWorker Threads (https://nodejs.org/api/worker_threads.html), while not new in this release, are still seeing progress. The use of Workers Threads no longer requires the use of a flag and they are progressing well towards moving out of experimental. While Node.js already performs well with the single-threaded event loop, there are some use-cases where additional threads can be leveraged for better results. We’d like you to try them out and let us know what use cases you have where they are helpful. For a quick introduction check out this great article: https://medium.com/@Trott/using-worker-threads-in-node-js-80494136dbb6. Diagnostic ReportsNode.js 12 brings with it a new experimental feature “Diagnostic report.” This allows you to generate a report on demand or when certain events occur. This report contains information that can be useful to help diagnose problems in production including crashes, slow performance, memory leaks, high CPU usage, unexpected errors and more. You can read more about it in this great article: https://medium.com/the-node-js-collection/easily-identify-problems-in-node-js-applications-with-diagnostic-report-dc82370d8029. Heap DumpsIf you ever needed to generate heap dumps in order to investigate memory issues but were slowed down by having to install a new module into production, the good news is that Node.js 12 brings integrated heap dump capability out of the box. You can check out the documentation in https://github.com/nodejs/node/pull/27133 and https://github.com/nodejs/node/pull/26501 to learn more. 
Startup ImprovementsIn Node.js 11 we shipped built-in code cache support in workers — when loading built-in libraries written in JavaScript, if the library was previously compiled on the main thread, the worker thread no longer needs to compile it from scratch but can reuse the v8 code cache generated by the main thread to speed up compilation. Similarly, the main thread can reuse the cache generated by workers. This gave a roughly 60% speedup for the startup of workers. Now in Node.js 12 we generate the code cache for built-in libraries in advance at build time, and embed it in the binary, so in the final release, the main thread can use the code cache to start up the initial load of any built-in library written in JavaScript. This gives a ~30% speedup in startup time for the main thread. ES6 Module SupportNode.js 12 brings an updated experimental version of support for ES6 modules. It is an important step toward a supported implementation and we’d like you to try it out and give us feedback. For more details check out this great blog post. New compiler and platform minimumsNode.js and V8 continue to embrace newer C++ features and take advantage of newer compiler optimizations and security enhancements. With the release of Node.js 12, the codebase now requires a minimum of GCC 6 and glibc 2.17 on platforms other than macOS and Windows. Binaries released at Node.js org use this new toolchain minimum and therefore include new compile-time performance and security enhancements. The increment in minimum compiler and libc requirements also increments minimums in supported platforms. Platforms using glibc (most platforms other than macOS and Windows) must now include a minimum version of 2.17. Common Linux platforms compatible with this version include Enterprise Linux 7 (RHEL and CentOS), Debian 8 and Ubuntu 14.04. Binaries available from nodejs.org will be compatible with these systems. 
Users needing to compile their own binaries on systems not natively supporting GCC 6 may need to use a custom toolchain. Even though Node.js 12.0.0 may compile with older compilers, expect the Node.js 12 codebase (including V8) to rapidly adopt C++ features supported by GCC 6 during the pre-LTS timeframe. Windows minimums remain the same as Node.js 11, requiring at least Windows 7, 2008 R2 or 2012 R2 and a minimum compiler of Visual Studio 2017. macOS users needing to compile Node.js will require a minimum of Xcode 8 and Node.js binaries made available on nodejs.org will only support a minimum of macOS 10.10 “Yosemite”. Further details are available in the Node.js BUILDING.md. Thank you!A big thank you to everyone who made this release come together, whether you submitted a pull request, helped with our benchmarking efforts, or you were in charge of one of the release versions. We’d also like to thank the Node.js Build Working Group for ensuring we have the infrastructure to create and test releases. The release manager for Node.js 12 is Bethany Griggs. For a full list of the release team members head here. You can read more about the complete list of features here. If you are interested in contributing to Node.js, we welcome you. 
Learn more via our contributor guidelines.","raw":null,"content":null,"categories":[],"tags":[]},{"title":"2019-04-20-rentingHouse","slug":"2019-04-20-rentingHouse","date":"2019-04-20T15:23:38.000Z","updated":"2021-07-27T07:09:41.883Z","comments":true,"path":"2019/04/20/2019-04-20-rentingHouse/","link":"","permalink":"http://zehai.info/2019/04/20/2019-04-20-rentingHouse/","excerpt":"","text":"快要毕业了,朋友圈里洋溢着,毕业的快乐,直系学弟们也返校进行了毕业论文的最终答辩,也希望他们都取得一个好的成绩,能在回首大学四年时候,不因为碌碌无为而后悔,能够在社会中,找到一份合适的工作,并感谢曾经那个在大学奋斗的自己。 毕业季第一道坎就是租房(家里有矿的,这篇文章你就可以关掉了),总体来说,在京就业,房租确实很贵的,不过对于计算机专业来说,应该还是可以的。我们熟知的计算机区域 望京SOHO(小企业居多) 中关村 中关村软件园(大厂) 对应的租房地点可以选择: 孙河(就可能地铁站远一点) 上地附近 回龙观,朱辛庄 主要平台(按推荐顺序): 自如(个人选择项,应届生有特权) 豆瓣小组 闲鱼 自如>蛋壳=贝壳 应届生可能囊中羞涩,所以建议选择自如,分期月付(应届免押金,分起费120附近),不过计算机专业的应届生薪资理论上是>=7k,所以我觉得应该马马虎虎可以生存下来了。之所以不推荐其他的中介,是因为你可能租房后,对于维修,舍友抽烟,养的宠物半夜狂叫,又退不了租,陷入麻烦中。(自如麻烦来结一下广告费) 另外整理一下招聘的软件(按推荐顺序): (个人软件工程,仅供参考) BOSS 拉钩 智联招聘 脉脉 希望这些资料对刚毕业的你有所帮助,其余想起来的,再直接更新","raw":null,"content":null,"categories":[],"tags":[]},{"title":"2019-04-17-日记","slug":"2019-04-17-日记","date":"2019-04-17T11:11:13.000Z","updated":"2021-07-27T07:09:41.883Z","comments":true,"path":"2019/04/17/2019-04-17-日记/","link":"","permalink":"http://zehai.info/2019/04/17/2019-04-17-%E6%97%A5%E8%AE%B0/","excerpt":"","text":"经历了连续9*13小时的工作后,我终于得到了一天的调休计划,昨晚十一点半打车从五棵松到家 洗了个热水澡,关了手机闹铃,打开了Alexa的环境噪音,难得踏实地进入了梦中。 但是!! 
我Alexa的闹钟忘记关了,七点被吵醒后一直没有睡着,所以起床热了杯牛奶,弄了张煎饼,涂了点番茄酱就凑合吃了,后来外出和朋友聊了会儿天,倒确实点出了一些目前存在的问题 一个好的技术不仅要知其然,更要知其所以然,多挖掘他背后的源码,去思考如何实现,这样才能在高并发时,将200ms优化到100ms,才是一个高级程序员应该具备的素质之一 Node学习分为三年,第一年知其语法,会写应用,第二年知其框架,高级开发,第三年,读其源码,知其原理 多用语言去写一些工具类,多去学习和参考优质轮子,而不是写一些玩具,别人都写烂的东西 (重要的应该就这么多了) 朋友的话很对,我也进行了思考,自己在JS的道路上,摸着黑走路,对于源码其实要读,但是之前打开看过一眼就一脸懵逼的状态,所以还是需要有时间学习一下优质的GitHub,撕开一个口子,然后进入到正轨,自己去多写一些方法区调用,然后一点点去琢磨,他的实现过程。 4月底的计划就是 尽量换一份工作,受不了8117,薪资还不如麦当劳的临时工 自如租约到期了,搬家到朱辛庄或者霍营 没换工作的话,买一本书通勤看会儿,换工作的话,抽个零碎的时间读,顺便整理笔记,更博(暂定这个月读一下v8的gc) 五一出去旅游,暂时想去杭州看看 买点竹筒,想做竹筒饭","raw":null,"content":null,"categories":[{"name":"life","slug":"life","permalink":"http://zehai.info/categories/life/"}],"tags":[{"name":"diary","slug":"diary","permalink":"http://zehai.info/tags/diary/"}]},{"title":"暂停更新通告[作废]","slug":"2019-04-13-暂停更新通告","date":"2019-04-13T14:03:36.000Z","updated":"2021-07-27T07:09:41.883Z","comments":true,"path":"2019/04/13/2019-04-13-暂停更新通告/","link":"","permalink":"http://zehai.info/2019/04/13/2019-04-13-%E6%9A%82%E5%81%9C%E6%9B%B4%E6%96%B0%E9%80%9A%E5%91%8A/","excerpt":"","text":"自今日起,博客开始停更 996.icu年后开始,互联网似乎过得都不好,从七陌被裁(也有个人原因吧),到被航天二院,知网,中电科因为学历卡住入职(BOSS直聘,面试完了,技术找人事审核不通过),后来遇到了一系列傲慢的中科软系列面试,无限加班的创业公司,还有那种以培训机构为目标招人的小公司。最终未能收获一个满意的offer,最终舔狗选择了一家说是不加班的某所,然,现在才发现,实在太忙,包括现在也刚刚到家,工作也没有pc,没有网络,所以也很不方便随时学习,可能有一些手写笔记,但经历有限,所以最近会停止更新 年后996冲上了榜首,让世界都在反思为什么中国的加班为什么如此疯狂,但话题热度很快下降,因为没有人会去放下手中的工作去抵制,毕竟生活总要继续下去 生活总是这样,起起落落落 努力不一定有回报,但不努力一定很(mei)舒(hui)服(bao) 晚安~hexo","raw":null,"content":null,"categories":[],"tags":[]},{"title":"PermutationSequence-60","slug":"2019-04-09-PermutationSequence","date":"2019-04-09T13:09:28.000Z","updated":"2021-07-27T07:09:41.882Z","comments":true,"path":"2019/04/09/2019-04-09-PermutationSequence/","link":"","permalink":"http://zehai.info/2019/04/09/2019-04-09-PermutationSequence/","excerpt":"","text":"ProblemThe set [1,2,3,...,*n*] contains a total of n! unique permutations. 
By listing and labeling all of the permutations in order, we get the following sequence for n = 3: "123" "132" "213" "231" "312" "321" Given n and k, return the kth permutation sequence. Note: Given n will be between 1 and 9 inclusive. Given k will be between 1 and n! inclusive. Example 1: 12Input: n = 3, k = 3Output: "213" Example 2: 12Input: n = 4, k = 9Output: "2314" keysolutionperfect","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"https与http","slug":"2019-04-06-https与http","date":"2019-04-06T14:41:27.000Z","updated":"2021-07-27T07:09:41.882Z","comments":true,"path":"2019/04/06/2019-04-06-https与http/","link":"","permalink":"http://zehai.info/2019/04/06/2019-04-06-https%E4%B8%8Ehttp/","excerpt":"","text":"whatadvantage客户端在使用HTTPS方式与Web服务器通信时有以下几个步骤,如图所示。 (1)客户端使用https的URL访问Web服务器,要求与Web服务器建立SSL连接。 (2)Web服务器收到客户端请求后,会将网站的证书信息(证书中包含公钥)传送一份给客户端。 (3)客户端的浏览器与Web服务器开始协商SSL连接的安全等级,也就是信息加密的等级。 (4)客户端的浏览器根据双方同意的安全等级,建立会话密钥,然后利用网站的公钥将会话密钥加密,并传送给网站。 (5)Web服务器利用自己的私钥解密出会话密钥。 (6)Web服务器利用会话密钥加密与客户端之间的通信。","raw":null,"content":null,"categories":[],"tags":[]},{"title":"SpiralMatrix2-59","slug":"2019-04-06-SpiralMatrix2","date":"2019-04-06T08:16:18.000Z","updated":"2021-07-27T07:09:41.882Z","comments":true,"path":"2019/04/06/2019-04-06-SpiralMatrix2/","link":"","permalink":"http://zehai.info/2019/04/06/2019-04-06-SpiralMatrix2/","excerpt":"","text":"problem Given a positive integer n, generate a square matrix filled with elements from 1 to n² in spiral order. 
Example: 1234567>Input: 3>Output:>[[ 1, 2, 3 ],[ 8, 9, 4 ],[ 7, 6, 5 ]>] key虽然标了medium,但是确实很简单,形成一个口字型闭环,一层层去处理就好了,然后再主要就是控制口字循环时候的边界,以及最后一个元素的判断 solution12345678910111213141516171819202122232425262728public int[][] generateMatrix(int n) { int [][]res = new int[n][n]; int left = 0; int right = n-1; int top = 0; int bottom = n-1; int index = 1; int quit = n*n; while(index<=quit){ for(int i=left;i<=right;i++) { res[top][i] = (index++); } top++; for(int i=top;i<=bottom;i++){ res[i][right] = (index++); } right--; for(int i=right;i>=left;i--) { res[bottom][i]=(index++); } bottom--; for(int i=bottom;i>=top;i--) { res[i][left]=(index++); } left++; } return res; } perfectno","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"LengthofLastWord","slug":"2019-04-06-LengthofLastWord","date":"2019-04-06T07:50:25.000Z","updated":"2021-07-27T07:09:41.882Z","comments":true,"path":"2019/04/06/2019-04-06-LengthofLastWord/","link":"","permalink":"http://zehai.info/2019/04/06/2019-04-06-LengthofLastWord/","excerpt":"","text":"problem Given a string s consists of upper/lower-case alphabets and empty space characters ' ', return the length of last word in the string. If the last word does not exist, return 0. Note: A word is defined as a character sequence consists of non-space characters only. 
Example: 12Input: "Hello World"Output: 5 key该方法调用了java的String.split(regex)所以在复杂度上回很高,大概仅仅beat了6%的玩家,但解决很快,正确的算法思维就倒序遍历,最后开始查往前,最后一个非空格查到空格结束 solution123456789101112//7mspublic int lengthOfLastWord(String s) { if(s.length()<=0)return 0; String[] tmp = s.split("\\\\s"); int lastIndex = tmp.length-1; if(lastIndex<0) { return 0; }else { return tmp[lastIndex].length(); } } perfect1234567891011121314151617181920212223class Solution { public int lengthOfLastWord(String s) { int n = s.length() - 1; int length = 0; for(int i = n; i >= 0; i--) { if(length == 0) { if(s.charAt(i) == ' ') { continue; }else { length++; } }else { if(s.charAt(i) == ' ') { break; } else { length++; } } } return length; }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"easy","slug":"easy","permalink":"http://zehai.info/tags/easy/"}]},{"title":"eventLoop","slug":"2019-04-04-eventLoop","date":"2019-04-04T15:03:46.000Z","updated":"2021-07-27T07:09:41.881Z","comments":true,"path":"2019/04/04/2019-04-04-eventLoop/","link":"","permalink":"http://zehai.info/2019/04/04/2019-04-04-eventLoop/","excerpt":"","text":"whatEvent Loop是一个程序结构,用于等待和发送消息和事件 a programming construct that waits for and dispatches events or messages in a program. 
简单说,就是在程序中设置两个线程:一个负责程序本身的运行,称为”主线程”;另一个负责主线程与其他进程(主要是各种I/O操作)的通信,被称为”Event Loop线程”(可以译为”消息线程”)。 由上图可以清楚知道Node的单线程指的是主线程为单线程 异步执行1234567// test.jssetTimeout(() => console.log(1));setImmediate(() => console.log(2));process.nextTick(() => console.log(3));//异步最快Promise.resolve().then(() => console.log(4));(() => console.log(5))();//同步任务最早执行//53412 异步分为两种: 本轮循环:process.nextTick(),Promise 次轮循环:setTimeout(),setInterval,setImmediate 每一次循环中,setTimeout等次轮循环在timers阶段执行,而本轮循环就在check阶段执行,所以会先展示","raw":null,"content":null,"categories":[],"tags":[]},{"title":"春节12响","slug":"2019-04-04-春节12响","date":"2019-04-04T14:15:52.000Z","updated":"2021-07-27T07:09:41.881Z","comments":true,"path":"2019/04/04/2019-04-04-春节12响/","link":"","permalink":"http://zehai.info/2019/04/04/2019-04-04-%E6%98%A5%E8%8A%8212%E5%93%8D/","excerpt":"","text":"1234567891011121314151617181920212223242526272829303132// File: twelve_biubiu.c// Permission: CN-2082-2// Author: Li.YiYi// Dept: PE-362, UG// Origin: TI-352132// 春节十二响 biu biu biu!#env "planet_engine"int init() { set_engine_number_mask(ENGINE_ALL); set_funeral_level(FUNERAL_FULL); // 允许误差10秒以内 if (unix_time() < make_unix_time(2082, 1, 28, 23, 59, 60-10)) return ERR_ENGIN_ENV; return engine_check_init(); // after compile and before real run}int main() { set_curve(CURVE_NATURAL); // 自然曲线耗费燃料最少 for (int i :range(0, 12, 1)) { engine_start(); wait_engine(ENGINE_STATE_CHAGNE); sleep(2000); engin_stop(); wait_engine(ENGINE_STATE_CHAGNE); sleep(4000); // 这个时长在模拟器里听起来更像心跳 } return 0;}int final() { 
engine_ensure_shutdown();}","raw":null,"content":null,"categories":[{"name":"life","slug":"life","permalink":"http://zehai.info/categories/life/"}],"tags":[]},{"title":"各种Java中锁","slug":"2019-04-04-各种Java中锁","date":"2019-04-04T13:54:58.000Z","updated":"2021-07-27T07:09:41.881Z","comments":true,"path":"2019/04/04/2019-04-04-各种Java中锁/","link":"","permalink":"http://zehai.info/2019/04/04/2019-04-04-%E5%90%84%E7%A7%8DJava%E4%B8%AD%E9%94%81/","excerpt":"","text":"悲观锁:先锁后用每次读数据都悲观认为会被其他操作修改,应用于synchroized , ReentrantLock,因为悲观所以开销大,会阻塞其他线程 乐观锁:先用后判断每次读数据乐观认为没有被其他操作修改,应用于java.util.concurrent.atomic,使用版本号和CAS算法实现 适用于多读的应用类型,提高吞吐量 公平锁:多个线程按申请所顺序取锁无 非公平锁多个线程不按申请顺序取锁,提高吞吐量 可入锁外层使用锁后,内层仍可以使用,而且不会死锁 不可重入锁独享锁共享锁互斥锁 读写锁 分段锁 偏向锁 轻量级锁 重量级锁 自旋锁","raw":null,"content":null,"categories":[{"name":"Java","slug":"Java","permalink":"http://zehai.info/categories/Java/"}],"tags":[{"name":"lock","slug":"lock","permalink":"http://zehai.info/tags/lock/"}]},{"title":"MinStack","slug":"2019-03-24-MinStack","date":"2019-03-24T09:23:55.000Z","updated":"2021-07-27T07:09:41.880Z","comments":true,"path":"2019/03/24/2019-03-24-MinStack/","link":"","permalink":"http://zehai.info/2019/03/24/2019-03-24-MinStack/","excerpt":"","text":"problem Design a stack that supports push, pop, top, and retrieving the minimum element in constant time. push(x) – Push element x onto stack. pop() – Removes the element on top of the stack. top() – Get the top element. getMin() – Retrieve the minimum element in the stack. 
Example: 12345678MinStack minStack = new MinStack();minStack.push(-2);minStack.push(0);minStack.push(-3);minStack.getMin(); --> Returns -3.minStack.pop();minStack.top(); --> Returns 0.minStack.getMin(); --> Returns -2.","raw":null,"content":null,"categories":[],"tags":[]},{"title":"FindMinimumInRotatedSortedArrayII","slug":"2019-03-23-FindMinimumInRotatedSortedArrayII","date":"2019-03-23T03:56:47.000Z","updated":"2021-07-27T07:09:41.880Z","comments":true,"path":"2019/03/23/2019-03-23-FindMinimumInRotatedSortedArrayII/","link":"","permalink":"http://zehai.info/2019/03/23/2019-03-23-FindMinimumInRotatedSortedArrayII/","excerpt":"","text":"problem Find Minimum in Rotated Sorted Array II Hard Suppose an array sorted in ascending order is rotated at some pivot unknown to you beforehand. (i.e., [0,1,2,4,5,6,7] might become [4,5,6,7,0,1,2]). Find the minimum element. The array may contain duplicates. Example 1: 12>Input: [1,3,5]>Output: 1 Example 2: 12>Input: [2,2,2,0,1]>Output: 0 Note: This is a follow up problem to Find Minimum in Rotated Sorted Array. Would allow duplicates affect the run-time complexity? How and why? key??? 
solution12345678910class Solution { public int findMin(int[] nums) { for (int i = 0; i < nums.length - 1; i++) { if (nums[i] > nums[i + 1]) { return nums[i + 1]; } } return nums[0]; }} perfect","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Hard","slug":"Hard","permalink":"http://zehai.info/tags/Hard/"}]},{"title":"FindMinimumInRotatedSortedArray","slug":"2019-03-23-FindMinimumInRotatedSortedArray","date":"2019-03-23T03:53:07.000Z","updated":"2021-07-27T07:09:41.879Z","comments":true,"path":"2019/03/23/2019-03-23-FindMinimumInRotatedSortedArray/","link":"","permalink":"http://zehai.info/2019/03/23/2019-03-23-FindMinimumInRotatedSortedArray/","excerpt":"","text":"problem Find Minimum in Rotated Sorted Array Medium Suppose an array sorted in ascending order is rotated at some pivot unknown to you beforehand. (i.e., [0,1,2,4,5,6,7] might become [4,5,6,7,0,1,2]). Find the minimum element. You may assume no duplicate exists in the array. 
Example 1: 12>Input: [3,4,5,1,2] >Output: 1 Example 2: 12>Input: [4,5,6,7,0,1,2]>Output: 0 keysolution12345678public int findMin(int[] nums) { for (int i = 0; i < nums.length - 1; i++) { if (nums[i] > nums[i + 1]) { return nums[i + 1]; } } return nums[0]; } perfect12I'm the perfectbut this problem will harder in the next problem","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"安全防范","slug":"2019-03-23-安全防范","date":"2019-03-23T03:30:43.000Z","updated":"2021-07-27T07:09:41.880Z","comments":true,"path":"2019/03/23/2019-03-23-安全防范/","link":"","permalink":"http://zehai.info/2019/03/23/2019-03-23-%E5%AE%89%E5%85%A8%E9%98%B2%E8%8C%83/","excerpt":"","text":"分类 XSS 攻击:对 Web 页面注入脚本,使用 JavaScript 窃取用户信息,诱导用户操作。 CSRF 攻击:伪造用户请求向网站发起恶意请求。 钓鱼攻击:利用网站的跳转链接或者图片制造钓鱼陷阱。 HTTP参数污染:利用对参数格式验证的不完善,对服务器进行参数注入攻击。 远程代码执行:用户通过浏览器提交执行命令,由于服务器端没有针对执行函数做过滤,导致在没有指定绝对路径的情况下就执行命令。 XSS攻击cross-site-scripting跨域脚本攻击","raw":null,"content":null,"categories":[{"name":"security","slug":"security","permalink":"http://zehai.info/categories/security/"}],"tags":[]},{"title":"mysql事务","slug":"2019-03-23-mysql事务","date":"2019-03-23T02:54:00.000Z","updated":"2021-07-27T07:09:41.880Z","comments":true,"path":"2019/03/23/2019-03-23-mysql事务/","link":"","permalink":"http://zehai.info/2019/03/23/2019-03-23-mysql%E4%BA%8B%E5%8A%A1/","excerpt":"","text":"whatMYSQL事务主要用于保证一串事情要么都成功,要么就回滚,例如付款后,要先写入支付订单表,再个人信息中加入会员权益。这两个操作要么顺序执行成功,要么就回滚 原则ACID Atomicity原子性 确保事务内的所有操作都成功完成,否则事务将被中止在故障点,以前的操作将回滚到以前的状态。 Consistency一致性 数据库的修改是一致的 Isolation隔离性 事务是彼此独立的 Durability可靠性 确保事务提交后,结果永久存在 隔离性 隔离性可以防止多个事务并发执行时由于交叉执行而导致数据的不一致。事务隔离分为不同级别,包括 读未提交(Read uncommitted)–不严格 读提交(read committed) 可重复读(repeatable read)–默认级别(避免幻读) 串行化(Serializable)–最严格 没有隔离性的问题1.脏读12update account set money=money+100 where name=’B’;update account set money=money - 100 where name=’A’; 
当执行第一条语句的时候,事务没有提交,那么来读B的账户钱都多了100块 脏读:读取了另一个事务未提交的数据 2.不可重复读情景:多次读同一个数据的时候,这个数据被别人改了,导致结果不一致 3.幻读幻读和不可重复读一样,读取到了另外一条已经提交的事务,所不同的是它针对的是一批数据的整体 实现方式自动方式beginTransactionScope(scope, ctx) 1234567const result = await app.mysql.beginTransactionScope(async conn => { // don't commit or rollback by yourself await conn.insert(table, row1); await conn.update(table, row2); return { success: true };}, ctx); // if error throw on scope, will auto rollback 手动方式beginTransaction 12345678910const conn = await app.mysql.beginTransaction(); // 初始化事务try { await conn.insert(table, row1); // 第一步操作 await conn.update(table, row2); // 第二步操作 await conn.commit(); // 提交事务} catch (err) { // error, rollback await conn.rollback(); // 一定记得捕获异常后回滚事务!! throw err;} 表达式Literalapp.mysql.literals.now 查看数据库事务隔离性级别 1select @@tx_isolation;","raw":null,"content":null,"categories":[{"name":"mysql","slug":"mysql","permalink":"http://zehai.info/categories/mysql/"}],"tags":[]},{"title":"bashrc","slug":"2019-03-15-bashrc","date":"2019-03-15T10:44:22.000Z","updated":"2021-07-27T07:09:41.879Z","comments":true,"path":"2019/03/15/2019-03-15-bashrc/","link":"","permalink":"http://zehai.info/2019/03/15/2019-03-15-bashrc/","excerpt":"","text":"where通常在home目录下的一个隐藏文件,访问可以1vim ~/.bashrc whatbash 在每次启动时都会加载 .bashrc 文件的内容。每个用户的 home 目录都有这个 shell 脚本。它用来存储并加载你的终端配置和环境变量 end12//更新修改source ~/.bashrc","raw":null,"content":null,"categories":[],"tags":[]},{"title":"安装mysql服务以及常见问题解决","slug":"2019-03-14-ubuntu安装mysql服务以及常见问题解决","date":"2019-03-14T06:22:39.000Z","updated":"2021-07-27T07:09:41.879Z","comments":true,"path":"2019/03/14/2019-03-14-ubuntu安装mysql服务以及常见问题解决/","link":"","permalink":"http://zehai.info/2019/03/14/2019-03-14-ubuntu%E5%AE%89%E8%A3%85mysql%E6%9C%8D%E5%8A%A1%E4%BB%A5%E5%8F%8A%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3/","excerpt":"","text":"安装sudo apt-get update sudo apt-get install mysql-server 解决远程连接 tips本人使用环境ubuntu16 完成安装后,远程连接你会发现2003报错,此时,你对 /etc/mysql/mysql.conf.d/ 文件夹中打开 mysqld.cnf文件修改即可 
修改内容将#bind-address = 127.0.0.1 原本没有注释,进行注释 然后你重新远程连接mysql直接变成1130的拒绝访问服务,接下来你要在服务器端登录mysql,执行 进入数据库 mysql -u root -p 切换数据库, mysql>use mysql; 查看root账号的登录权限, mysql>select host, user from user; 修改登录权限 mysql>update user set host = ‘%’ where user = ‘root’; 刷新,生效,最后一步,至关重要 mysql>flush privileges;","raw":null,"content":null,"categories":[],"tags":[]},{"title":"SpringbootMQ","slug":"2019-03-13-SpringbootMQ","date":"2019-03-13T13:11:53.000Z","updated":"2021-07-27T07:09:41.879Z","comments":true,"path":"2019/03/13/2019-03-13-SpringbootMQ/","link":"","permalink":"http://zehai.info/2019/03/13/2019-03-13-SpringbootMQ/","excerpt":"","text":"what is MQ如果想知道MQ的详细知识可以看我之前的为什么使用消息队列MQ 这里选择最重要的提一下:MQ即消息队列,用来实现程序的异步和解耦,起到消息缓冲,消息分发。通俗来讲就是一个医院(服务器)里面有多个医生(线程或进程),让病人都排队(消息缓冲),有的去A部门,有的去B部门(消息分发)。 成员RabbitMQRabbitMQ是实现AMQP(高级消息队列协议Advanced Message Queuing Protocol)的消息中间件的一种,Feature就是组件之间解耦,病人排他的队,医生看他的病人,至于怎么排,医生不用操心,至于怎么看病,病人不用操心,都交给MQ 术语:面向消息,队列,路由(点对点/发布订阅),可靠安全","raw":null,"content":null,"categories":[{"name":"high_availability","slug":"high-availability","permalink":"http://zehai.info/categories/high-availability/"}],"tags":[{"name":"MQ","slug":"MQ","permalink":"http://zehai.info/tags/MQ/"}]},{"title":"Java问题排查工具","slug":"2019-03-13-Java问题排查工具","date":"2019-03-13T06:22:53.000Z","updated":"2021-07-27T07:09:41.878Z","comments":true,"path":"2019/03/13/2019-03-13-Java问题排查工具/","link":"","permalink":"http://zehai.info/2019/03/13/2019-03-13-Java%E9%97%AE%E9%A2%98%E6%8E%92%E6%9F%A5%E5%B7%A5%E5%85%B7/","excerpt":"","text":"一下文字摘自JAVA公众号 Linux命令类tail最常用的tail -f 1tail -300f shopbase.log #倒数300行并进入实时监听文件写入模式 grep12345678910grep forest f.txt #文件查找grep forest f.txt cpf.txt #多文件查找grep 'log' /home/admin -r -n #目录下查找所有符合关键字的文件cat f.txt | grep -i shopbase grep 'shopbase' /home/admin -r -n --include *.{vm,java} #指定文件后缀grep 'shopbase' /home/admin -r -n --exclude *.{vm,java} #反匹配seq 10 | grep 5 -A 3 #上匹配seq 10 | grep 5 -B 3 #下匹配seq 10 | grep 5 -C 3 #上下匹配,平时用这个就妥了cat f.txt | grep -c 'SHOPBASE' awk1 基础命令 
123456awk '{print $4,$6}' f.txtawk '{print NR,$0}' f.txt cpf.txt awk '{print FNR,$0}' f.txt cpf.txtawk '{print FNR,FILENAME,$0}' f.txt cpf.txtawk '{print FILENAME,"NR="NR,"FNR="FNR,"$"NF"="$NF}' f.txt cpf.txtecho 1:2:3:4 | awk -F: '{print $1,$2,$3,$4}' 2 匹配 1234awk '/ldb/ {print}' f.txt #匹配ldbawk '!/ldb/ {print}' f.txt #不匹配ldbawk '/ldb/ && /LISTEN/ {print}' f.txt #匹配ldb和LISTENawk '$5 ~ /ldb/ {print}' f.txt #第五列匹配ldb 3 内建变量 NR:NR表示从awk开始执行后,按照记录分隔符读取的数据次数,默认的记录分隔符为换行符,因此默认的就是读取的数据行数,NR可以理解为Number of Record的缩写。 FNR:在awk处理多个输入文件的时候,在处理完第一个文件后,NR并不会从1开始,而是继续累加,因此就出现了FNR,每当处理一个新文件的时候,FNR就从1开始计数,FNR可以理解为File Number of Record。 NF: NF表示目前的记录被分割的字段的数目,NF可以理解为Number of Field。 find12345678910111213sudo -u admin find /home/admin /tmp /usr -name \\*.log(多个目录去找)find . -iname \\*.txt(大小写都匹配)find . -type d(当前目录下的所有子目录)find /usr -type l(当前目录下所有的符号链接)find /usr -type l -name "z*" -ls(符号链接的详细信息 eg:inode,目录)find /home/admin -size +250000k(超过250000k的文件,当然+改成-就是小于了)find /home/admin f -perm 777 -exec ls -l {} \\; (按照权限查询文件)find /home/admin -atime -1 1天内访问过的文件find /home/admin -ctime -1 1天内状态改变过的文件 find /home/admin -mtime -1 1天内修改过的文件find /home/admin -amin -1 1分钟内访问过的文件find /home/admin -cmin -1 1分钟内状态改变过的文件 find /home/admin -mmin -1 1分钟内修改过的文件 pgm批量查询vm-shopbase满足条件的日志 1pgm -A -f vm-shopbase 'cat /home/admin/shopbase/logs/shopbase.log.2017-01-17|grep 2069861630' tsartsar是咱公司自己的采集工具。很好用, 将历史收集到的数据持久化在磁盘上,所以我们快速来查询历史的系统数据。当然实时的应用情况也是可以查询的啦。大部分机器上都有安装。 1tsar ##可以查看最近一天的各项指标 1tsar --live ##可以查看实时指标,默认五秒一刷 1tsar -d 20161218 ##指定查看某天的数据,貌似最多只能看四个月的数据 1234tsar --memtsar --loadtsar --cpu##当然这个也可以和-d参数配合来查询某天的单个指标的情况 toptop除了看一些基本信息之外,剩下的就是配合来查询vm的各种问题了 123ps -ef | grep javatop -H -p pid 获得线程10进制转16进制后jstack去抓看这个线程到底在干啥 其他12netstat -nat|awk '{print $6}'|sort|uniq -c|sort -rn #查看当前连接,注意close_wait偏高的情况,比如如下 排查利器btrace首当其冲的要说的是btrace。真是生产环境&预发的排查问题大杀器。 简介什么的就不说了。直接上代码干 查看当前谁调用了ArrayList的add方法,同时只打印当前ArrayList的size大于500的线程调用栈 @OnMethod(clazz = “java.util.ArrayList”, method=”add”, location = 
@Location(value = Kind.CALL, clazz = “/./“, method = “/./“))public static void m(@ProbeClassName String probeClass, @ProbeMethodName String probeMethod, @TargetInstance Object instance, @TargetMethodOrField String method) { 1234567if(getInt(field("java.util.ArrayList", "size"), instance) > 479){ println("check who ArrayList.add method:" + probeClass + "#" + probeMethod + ", method:" + method + ", size:" + getInt(field("java.util.ArrayList", "size"), instance)); jstack(); println(); println("==========================="); println();} } 监控当前服务方法被调用时返回的值以及请求的参数 @OnMethod(clazz = “com.taobao.sellerhome.transfer.biz.impl.C2CApplyerServiceImpl”, method=”nav”, location = @Location(value = Kind.RETURN))public static void mt(long userId, int current, int relation, String check, String redirectUrl, @Return AnyType result) { 1println("parameter# userId:" + userId + ", current:" + current + ", relation:" + relation + ", check:" + check + ", redirectUrl:" + redirectUrl + ", result:" + result); } 其他功能集团的一些工具或多或少都有,就不说了。感兴趣的请移步。https://github.com/btraceio/btrace 注意: 经过观察,1.3.9的release输出不稳定,要多触发几次才能看到正确的结果 正则表达式匹配trace类时范围一定要控制,否则极有可能出现跑满CPU导致应用卡死的情况 由于是字节码注入的原理,想要应用恢复到正常情况,需要重启应用。 GreysGreys是@杜琨的大作吧。说几个挺棒的功能(部分功能和btrace重合): sc -df xxx: 输出当前类的详情,包括源码位置和classloader结构 trace class method: 相当喜欢这个功能! 
很早前可以早JProfiler看到这个功能。打印出当前方法调用的耗时情况,细分到每个方法。对排查方法性能时很有帮助,比如我之前这篇就是使用了trace命令来的:http://www.atatech.org/articles/52947。 其他功能部分和btrace重合,可以选用,感兴趣的请移步。http://www.atatech.org/articles/26247 另外相关联的是arthas,他是基于Greys的,感兴趣的再移步http://mw.alibaba-inc.com/products/arthas/docs/middleware-container/arthas.wiki/home.html?spm=a1z9z.8109794.header.32.1lsoMc javOSize就说一个功能classes:通过修改了字节码,改变了类的内容,即时生效。 所以可以做到快速的在某个地方打个日志看看输出,缺点是对代码的侵入性太大。但是如果自己知道自己在干嘛,的确是不错的玩意儿。 其他功能Greys和btrace都能很轻易做的到,不说了。 可以看看我之前写的一篇javOSize的简介http://www.atatech.org/articles/38546官网请移步http://www.javosize.com/ JProfiler之前判断许多问题要通过JProfiler,但是现在Greys和btrace基本都能搞定了。再加上出问题的基本上都是生产环境(网络隔离),所以基本不怎么使用了,但是还是要标记一下。官网请移步https://www.ej-technologies.com/products/jprofiler/overview.html 大杀器eclipseMAT可作为eclipse的插件,也可作为单独的程序打开。详情请移步http://www.eclipse.org/mat/ zprofiler集团内的开发应该是无人不知无人不晓了。简而言之一句话:有了zprofiler还要mat干嘛详情请移步zprofiler.alibaba-inc.com java三板斧,噢不对,是七把jps我只用一条命令: 1sudo -u admin /opt/taobao/java/bin/jps -mlvV jstack普通用法: 1sudo -u admin /opt/taobao/install/ajdk-8_1_1_fp1-b52/bin/jstack 2815 native+java栈: 1sudo -u admin /opt/taobao/install/ajdk-8_1_1_fp1-b52/bin/jstack -m 2815 jinfo可看系统启动的参数,如下 1sudo -u admin /opt/taobao/install/ajdk-8_1_1_fp1-b52/bin/jinfo -flags 2815 jmap两个用途 1.查看堆的情况 1sudo -u admin /opt/taobao/install/ajdk-8_1_1_fp1-b52/bin/jmap -heap 2815 2.dump 1sudo -u admin /opt/taobao/install/ajdk-8_1_1_fp1-b52/bin/jmap -dump:live,format=b,file=/tmp/heap2.bin 2815 或者 1sudo -u admin /opt/taobao/install/ajdk-8_1_1_fp1-b52/bin/jmap -dump:format=b,file=/tmp/heap3.bin 2815 3.看看堆都被谁占了? 再配合zprofiler和btrace,排查问题简直是如虎添翼 1sudo -u admin /opt/taobao/install/ajdk-8_1_1_fp1-b52/bin/jmap -histo 2815 | head -10 jstatjstat参数众多,但是使用一个就够了 1sudo -u admin /opt/taobao/install/ajdk-8_1_1_fp1-b52/bin/jstat -gcutil 2815 1000 jdb时至今日,jdb也是经常使用的。jdb可以用来预发debug,假设你预发的java_home是/opt/taobao/java/,远程调试端口是8000.那么sudo -u admin /opt/taobao/java/bin/jdb -attach 8000. 
出现以上代表jdb启动成功。后续可以进行设置断点进行调试。具体参数可见oracle官方说明http://docs.oracle.com/javase/7/docs/technotes/tools/windows/jdb.html CHLSDBCHLSDB感觉很多情况下可以看到更好玩的东西,不详细叙述了。 查询资料听说jstack和jmap等工具就是基于它的。 1sudo -u admin /opt/taobao/java/bin/java -classpath /opt/taobao/java/lib/sa-jdi.jar sun.jvm.hotspot.CLHSDB 更详细的可见R大此贴http://rednaxelafx.iteye.com/blog/1847971 plugin of intellij ideakey promoter快捷键一次你记不住,多来几次你总能记住了吧? maven helper分析maven依赖的好帮手。 VM options 你的类到底是从哪个文件加载进来的? 123-XX:+TraceClassLoading结果形如[Loaded java.lang.invoke.MethodHandleImpl$Lazy from D:\\programme\\jdk\\jdk8U74\\jre\\lib\\rt.jar] 应用挂了输出dump文件 12-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/admin/logs/java.hprof集团的vm参数里边基本都有这个选项 jar包冲突把这个单独写个大标题不过分吧?每个人或多或少都处理过这种烦人的case。我特么下边这么多方案不信就搞不定你? mvn dependency:tree > ~/dependency.txt打出所有依赖 mvn dependency:tree -Dverbose -Dincludes=groupId:artifactId只打出指定groupId和artifactId的依赖关系 -XX:+TraceClassLoadingvm启动脚本加入。在tomcat启动脚本中可见加载类的详细信息 -verbosevm启动脚本加入。在tomcat启动脚本中可见加载类的详细信息 greys:scgreys的sc命令也能清晰的看到当前类是从哪里加载过来的 tomcat-classloader-locate通过以下url可以获知当前类是从哪里加载的curl http://localhost:8006/classloader/locate?class=org.apache.xerces.xs.XSObject ALI-TOMCAT带给我们的惊喜(感谢@务观) 列出容器加载的jar列表 curl http://localhost:8006/classloader/jars 列出当前当当前类加载的实际jar包位置,解决类冲突时有用 curl http://localhost:8006/classloader/locate?class=org.apache.xerces.xs.XSObject 其他gprefhttp://www.atatech.org/articles/33317 dmesg如果发现自己的java进程悄无声息的消失了,几乎没有留下任何线索,那么dmesg一发,很有可能有你想要的。 1sudo dmesg|grep -i kill|less 去找关键字oom_killer。找到的结果类似如下: 12345[6710782.021013] java invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0, oom_scoe_adj=0[6710782.070639] [<ffffffff81118898>] ? 
oom_kill_process+0x68/0x140 [6710782.257588] Task in /LXC011175068174 killed as a result of limit of /LXC011175068174 [6710784.698347] Memory cgroup out of memory: Kill process 215701 (java) score 854 or sacrifice child [6710784.707978] Killed process 215701, UID 679, (java) total-vm:11017300kB, anon-rss:7152432kB, file-rss:1232kB 以上表明,对应的java进程被系统的OOM Killer给干掉了,得分为854.解释一下OOM killer(Out-Of-Memory killer),该机制会监控机器的内存资源消耗。当机器内存耗尽前,该机制会扫描所有的进程(按照一定规则计算,内存占用,时间等),挑选出得分最高的进程,然后杀死,从而保护机器。 dmesg日志时间转换公式:log实际时间=格林威治1970-01-01+(当前时间秒数-系统启动至今的秒数+dmesg打印的log时间)秒数: 1date -d "1970-01-01 UTC `echo "$(date +%s)-$(cat /proc/uptime|cut -f 1 -d' ')+12288812.926194"|bc ` seconds" 剩下的,就是看看为什么内存这么大,触发了OOM-Killer了。 新技能getRateLimiter想要精细的控制QPS? 比如这样一个场景,你调用某个接口,对方明确需要你限制你的QPS在400之内你怎么控制?这个时候RateLimiter就有了用武之地。详情可移步http://ifeve.com/guava-ratelimite","raw":null,"content":null,"categories":[],"tags":[]},{"title":"JumpGame","slug":"2019-03-13-JumpGame","date":"2019-03-13T01:52:33.000Z","updated":"2021-07-27T07:09:41.878Z","comments":true,"path":"2019/03/13/2019-03-13-JumpGame/","link":"","permalink":"http://zehai.info/2019/03/13/2019-03-13-JumpGame/","excerpt":"","text":"problem Given an array of non-negative integers, you are initially positioned at the first index of the array. Each element in the array represents your maximum jump length at that position. Determine if you are able to reach the last index. Example 1: 123Input: [2,3,1,1,4]Output: trueExplanation: Jump 1 step from index 0 to 1, then 3 steps to the last index. Example 2: 1234Input: [3,2,1,0,4]Output: falseExplanation: You will always arrive at index 3 no matter what. Its maximum jump length is 0, which makes it impossible to reach the last index. 
key本题有两个易理解错的地方 达到最后一个index或者超过最后一个index是可以的 【2,5,0,0】第一个2可以跳两步,然后我们在5的基础上跳五步 本题采用贪心算法,算出局部最优解就可以了,当然也可以考虑dp,但本题没有这个必要 solution123456public boolean canJump(int[] nums) { int reach = nums[0]; for(int i = 1; i < nums.length && reach >= i; i++) if(i + nums[i] > reach) reach = i + nums[i]; return reach >= (nums.length-1) ? true : false; } perfect1I'm the perfect","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"SpiralMatrix","slug":"2019-03-12-SpiralMatrix","date":"2019-03-12T13:33:48.000Z","updated":"2021-07-27T07:09:41.877Z","comments":true,"path":"2019/03/12/2019-03-12-SpiralMatrix/","link":"","permalink":"http://zehai.info/2019/03/12/2019-03-12-SpiralMatrix/","excerpt":"","text":"54.problem Given a matrix of m x n elements (m rows, n columns), return all elements of the matrix in spiral order. Example 1: 1234567Input:[ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ]]Output: [1,2,3,6,9,8,7,4,5] Example 2: 1234567Input:[ [1, 2, 3, 4], [5, 6, 7, 8], [9,10,11,12]]Output: [1,2,3,4,8,12,11,10,9,5,6,7] key很简单的循环输出的例子,从【0,0】的位置顺时针扫一圈,然后缩小一圈,继续扫描,不过有一个细节就是第三次第四循环前,要判断一下,防止最后一层循环只有一行 solution123456789101112131415161718192021222324252627282930public List<Integer> spiralOrder(int[][] matrix) { List<Integer> ans = new ArrayList<Integer>(); if (matrix.length == 0) return ans; int rs = 0, re = matrix.length - 1;// rowStart rowEnd int cs = 0, ce = matrix[0].length - 1;// columnStart columnEnd while (rs <= re && cs <= ce) { for (int i = cs; i <= ce; i++) { ans.add(matrix[rs][i]); } for(int j=rs+1;j<=re;j++) { ans.add(matrix[j][ce]); } if(rs<re&&cs<ce) { for(int k=ce-1;k>cs;k--) { ans.add(matrix[re][k]); } for(int l=re;l>rs;l--) { ans.add(matrix[l][cs]); } } rs++; re--; cs++; ce--; } return ans; } perfect1yehh,I'm the 
perfect","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Medium","slug":"Medium","permalink":"http://zehai.info/tags/Medium/"}]},{"title":"如何设置hexo的favico","slug":"2019-03-12-如何设置hexo的favico","date":"2019-03-12T10:35:17.000Z","updated":"2021-07-27T07:09:41.877Z","comments":true,"path":"2019/03/12/2019-03-12-如何设置hexo的favico/","link":"","permalink":"http://zehai.info/2019/03/12/2019-03-12-%E5%A6%82%E4%BD%95%E8%AE%BE%E7%BD%AEhexo%E7%9A%84favico/","excerpt":"","text":"solutionsource 下放置32*32的favico.icon文件并在根目录的_config.yml中设置 favicon: /favicon.ico","raw":null,"content":null,"categories":[{"name":"others","slug":"others","permalink":"http://zehai.info/categories/others/"}],"tags":[]},{"title":"如何将域名绑定到hexo","slug":"2019-03-12-如何将域名绑定到hexo","date":"2019-03-12T09:44:20.000Z","updated":"2021-07-27T07:09:41.877Z","comments":true,"path":"2019/03/12/2019-03-12-如何将域名绑定到hexo/","link":"","permalink":"http://zehai.info/2019/03/12/2019-03-12-%E5%A6%82%E4%BD%95%E5%B0%86%E5%9F%9F%E5%90%8D%E7%BB%91%E5%AE%9A%E5%88%B0hexo/","excerpt":"","text":"problem很多人可能都有hexo博客,会有一个githubname.github.io的地址,然后自己可能想去买一个域名,方便记忆,但是解析后迟迟用不了,该文章就来详细描述一下步骤。 solution1.拥有一个githubname.github.io可以正常访问的域名,如我的GitHub博客:https://shawngoethe.github.io 2.购买域名,个人推荐阿里云,首年年费比较便宜,适合个人折腾,博客建议com,me,info,pro(专家),mobi(kindle电子书的格式),再不济可以选择tech,cc之类的,国外可以参考Linost之类的网页 3.购买域名进行实名认证,否则无法使用 4.进行解析:记录类型CNAME(进行转发),主机记录@(避免主机记录选择www,输入域名要多写www),记录值为shawngoethe.github.io,TTL选择10分钟就可以了 上述方法属于将我购买的zehai.info转发到了shawngoethe.github.io,还可以“记录类型”选择“A”来填写IPv4的地址,地址可以通过ping shawngoethe.github.io 来获得 5.修改代码:很多人忽视了要在源代码/hexoblog/source/目录下添加CNAME文件(注意没有尾缀),然后在该文件下填写zehai.info(可以兼容,www.zehai.info 和 zehai.info 两种访问方式,但如果填写 www.zehai.info 则只支持 www.zehai.info 一种访问方式) 
6.等十分钟左右,让解析生效,好了,你可以访问我的hexo获取更多内容","raw":null,"content":null,"categories":[{"name":"others","slug":"others","permalink":"http://zehai.info/categories/others/"}],"tags":[]},{"title":"MaximumSubarray","slug":"2019-03-10-MaximumSubarray","date":"2019-03-10T13:39:50.000Z","updated":"2021-07-27T07:09:41.876Z","comments":true,"path":"2019/03/10/2019-03-10-MaximumSubarray/","link":"","permalink":"http://zehai.info/2019/03/10/2019-03-10-MaximumSubarray/","excerpt":"","text":"problem Maximum Subarray Easy Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum. Example: 123Input: [-2,1,-3,4,-1,2,1,-5,4],Output: 6Explanation: [4,-1,2,1] has the largest sum = 6. Follow up: If you have figured out the O(n) solution, try coding another solution using the divide and conquer approach, which is more subtle. key我们定义一个和为第一位数,然后用curSum来保存递增量 ps ans–>answer cur–>cursor solution12345678910111213class Solution { public int maxSubArray(int[] nums) { int ans=nums[0], curSum=0; for (int i=0; i<nums.length; i++) { curSum = curSum + nums[i]; ans = Math.max(ans, curSum); curSum = Math.max(0, curSum); } return ans; }} perfect12345678910class Solution { public int maxSubArray(int[] nums) { int dp = nums[0], maxSum=nums[0]; for (int i=1; i<nums.length; i++) { dp = dp<0?nums[i]:nums[i]+dp; maxSum=Math.max(maxSum, dp); } return maxSum; }}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[]},{"title":"pow(x,n)","slug":"2019-03-10-pow-x-n","date":"2019-03-10T13:10:48.000Z","updated":"2021-07-27T07:09:41.876Z","comments":true,"path":"2019/03/10/2019-03-10-pow-x-n/","link":"","permalink":"http://zehai.info/2019/03/10/2019-03-10-pow-x-n/","excerpt":"","text":"problem \\50. Pow(x, n) Medium Implement pow(x, n), which calculates x raised to the power n (xn). 
Example 1: 12Input: 2.00000, 10Output: 1024.00000 Example 2: 12Input: 2.10000, 3Output: 9.26100 Example 3: 123Input: 2.00000, -2Output: 0.25000Explanation: 2-2 = 1/22 = 1/4 = 0.25 Note: -100.0 < x < 100.0 n is a 32-bit signed integer, within the range [−231, 231 − 1] solution12345678910111213141516171819202122public double myPow(double x, int n) { long N = n; if (N < 0) { x = 1 / x; N = -N; } double ans = 1; double cur = x;//2 for (long i = N; i > 0; i /= 2) { if (i % 2 == 1) ans = ans * cur; cur = cur * cur; } return ans; }//偷懒方法public double myPow(double x, int n) { return Math.pow(x, n); } key其实先使用了偷懒的方法,调用Math库的pow方法,然后写过一版 123for(long i=N;i>0;i--) { ans=ans*cur;} 这个会直接报超时的错误,因为的计算量会非常大,在计算(-1.00000,-2147483648)时候超时了,虽然我们可以通过判断x来避免这一个超时,但是我想到了,可以通过n/2来迅速减少相乘的次数。时间大概是8ms perfect1234567891011121314151617181920212223242526272829class Solution { public double findPower(double x,long n){ if(n == Long.valueOf(1)) return x; if(n % 2 == 0){ double half_pow = findPower(x,n/2); return half_pow * half_pow; }else{ double half_pow = findPower(x,(n-1)/2); return half_pow * half_pow * x; } } public double myPow(double x, int n) { if( n==0 ) return 1; long n_long = (long) n; if( n > 0 ) return findPower(x,n); x = 1 / x; long n_long_abs = (long) Math.abs((long)n); if(n_long_abs == 1) return x; return findPower(x,n_long_abs); 
}}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[]},{"title":"我在一个不属于我的地方游荡","slug":"2019-03-06-我在一个不属于我的地方游荡","date":"2019-03-06T08:39:50.000Z","updated":"2021-07-27T07:09:41.876Z","comments":true,"path":"2019/03/06/2019-03-06-我在一个不属于我的地方游荡/","link":"","permalink":"http://zehai.info/2019/03/06/2019-03-06-%E6%88%91%E5%9C%A8%E4%B8%80%E4%B8%AA%E4%B8%8D%E5%B1%9E%E4%BA%8E%E6%88%91%E7%9A%84%E5%9C%B0%E6%96%B9%E6%B8%B8%E8%8D%A1/","excerpt":"","text":"我在一个不属于我的地方游荡2019年的3月6日,距离我上一份工作离职,已经37天,日子过得虽然不好,但也不算差,好消息是阿len还陪着我,坏消息是一直过着异地恋的生活,不知道是为什么,是我进入了焦虑的状态,每天的日常就是投简历,思考人生,发呆,看up主秀恩爱 其实回顾前三次的找工作经历,哪次不是觉得自己快要变成咸鱼了,然后收到了一两个offer,不过今年的不同点就是,有三家,我已经过了用人单位的面试,却被卡在了人力资源部门的审核上,我时常恨自己的学历,却无法去原谅曾经高考的自己,写这篇文章的时候,我刚刚从清华的北门进入校园,下午两点半的宿舍区,没有一点噪音,天空的乌鸦鸣叫在空旷的校园回荡,仿佛,在感叹今日的好天气,阳光那么明亮,洒在光秃秃的树枝上。 17年考研复习期间埋下来的雷,最终还是爆炸了,18年,19年,20年,似乎时间过得很快,我丢失了那一次机会后,我似乎再也没有机会去投入身心去复习,每天的大脑里更多的是,好累啊,好烦啊,什么时候发工资啊。越生是怀念起无忧无虑的本科生活,天天不用担心我是谁,我在哪儿,学什么,可能唯一需要费点脑经的就是,中午吃啥 而现在,我走在一个不属于我的世界里,熟悉又陌生,我什么都不知道,因为我不知道我要干嘛,前方一个是找不到工作的工作方向,一个是会饿死的考研方向,世界很精彩,我却显得那么渺小,就深深想起来用人部门发信息和我说: 从技术层面上,我认为从工作年限上,你的水平是够的。对于候选人的学历背景上,央企有自身的痼疾,用人部门的话语权不一定大于人力部门,这个你也无须介怀。 工作的前三年对于一个工程师来说是至关重要的,如果喜欢这条路,就多花点时间,加油!江湖不大,有缘再见! 
不知道接下来应该做什么,或许这就是应试教育的悲哀,我也只能许愿,三月份能够拿到一个不错的offer,先活下来,我是子苏,一个快要得抑郁症的人。","raw":null,"content":null,"categories":[{"name":"mood","slug":"mood","permalink":"http://zehai.info/categories/mood/"}],"tags":[{"name":"diary","slug":"diary","permalink":"http://zehai.info/tags/diary/"}]},{"title":"docker+springboot","slug":"2019-03-05-docker-springboot","date":"2019-03-05T13:26:10.000Z","updated":"2021-07-27T07:09:41.875Z","comments":true,"path":"2019/03/05/2019-03-05-docker-springboot/","link":"","permalink":"http://zehai.info/2019/03/05/2019-03-05-docker-springboot/","excerpt":"","text":"what在 pom.xml-properties中添加 Docker 镜像名称 123<properties> <docker.image.prefix>springboot</docker.image.prefix></properties> plugins 中添加 Docker 构建插件: 1234567891011121314151617181920212223242526<build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> <!-- Docker maven plugin --> <plugin> <groupId>com.spotify</groupId> <artifactId>docker-maven-plugin</artifactId> <version>1.0.0</version> <configuration> <imageName>${docker.image.prefix}/${project.artifactId}</imageName> <dockerDirectory>src/main/docker</dockerDirectory> <resources> <resource> <targetPath>/</targetPath> <directory>${project.build.directory}</directory> <include>${project.build.finalName}.jar</include> </resource> </resources> </configuration> </plugin> <!-- Docker maven plugin --> </plugins></build> 在目录src/main/docker下创建 Dockerfile 文件,Dockerfile 文件用来说明如何来构建镜像。 1234FROM openjdk:8-jdk-alpineVOLUME /tmpADD spring-boot-docker-1.0.jar app.jarENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"] 这个 Dockerfile 文件很简单,构建 Jdk 基础环境,添加 Spring Boot Jar 到镜像中,简单解释一下: FROM ,表示使用 Jdk8 环境 为基础镜像,如果镜像不是本地的会从 DockerHub 进行下载 VOLUME ,VOLUME 指向了一个/tmp的目录,由于 Spring Boot 使用内置的Tomcat容器,Tomcat 默认使用/tmp作为工作目录。这个命令的效果是:在宿主机的/var/lib/docker目录下创建一个临时文件并把它链接到容器中的/tmp目录 ADD ,拷贝文件并且重命名 ENTRYPOINT ,为了缩短 Tomcat 的启动时间,添加java.security.egd的系统属性指向/dev/urandom作为 
ENTRYPOINT 这样 Spring Boot 项目添加 Docker 依赖就完成了。","raw":null,"content":null,"categories":[{"name":"docker","slug":"docker","permalink":"http://zehai.info/categories/docker/"}],"tags":[]},{"title":"dockerfile","slug":"2019-03-04-dockerfile","date":"2019-03-04T13:09:55.000Z","updated":"2021-07-27T07:09:41.875Z","comments":true,"path":"2019/03/04/2019-03-04-dockerfile/","link":"","permalink":"http://zehai.info/2019/03/04/2019-03-04-dockerfile/","excerpt":"","text":"what通过dockerfile写入程序、库、资源、配置参数等,来生成image文件,可以类比node的package.json或者nginx.conf的文件 format
## Dockerfile文件格式
# This dockerfile uses the ubuntu image
# VERSION 2 - EDITION 1
# Author: docker_user
# Command format: Instruction [arguments / command] ..

# 1、第一行必须指定 基础镜像信息
FROM ubuntu

# 2、维护者信息
MAINTAINER docker_user docker_user@email.com

# 3、镜像操作指令
RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
RUN apt-get update && apt-get install -y nginx
RUN echo "\\ndaemon off;" >> /etc/nginx/nginx.conf

# 4、容器启动执行指令
CMD /usr/sbin/nginx
build image docker build 运行该命令时,根据dockerfile文件及上下文构建新的docker镜像,其中上下文是指dockerfile所在的本地路径或者网络路径url。 ps:docker build 时,构建在后台守护进程daemon中进行,而不是cli(command line interface)中;构建前,构建进程会将上下文的全部内容递归发送给守护进程,所以最好在一个空目录下构建,并将dockerfile文件放在该目录下 还可以通过.dockerignore的文件来忽略上下文目录中的部分文件和目录,同.gitignore 通过-f命令指定文件位置,如: docker build -f /path/to/dockerfile . 
image tag镜像标签 docker build -t nginx:v3 . cache Docker 守护进程会一条一条地执行 Dockerfile 中的指令,而且会在每一步提交并生成一个新镜像,最后会输出最终镜像的ID。生成完成后,Docker 守护进程会自动清理你发送的上下文。 Dockerfile文件中的每条指令会被独立执行,并会创建一个新镜像,RUN cd /tmp等命令不会对下条指令产生影响。 Docker 会重用已生成的中间镜像,以加速docker build的构建速度。 example
mkdir mynginx
cd mynginx
vi Dockerfile
//制作dockerfile
FROM nginx
RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html
//save && run this code in mynginx
docker build -t nginx:v1 .
//v1 后面有一个空格和一个点
//点代表当前目录
//查看image
docker images
//run
docker run --name docker_nginx_v1 -d -p 80:80 nginx:v1
//docker_nginx_v1为容器名
//nginx:v1为image名
","raw":null,"content":null,"categories":[{"name":"Linux","slug":"Linux","permalink":"http://zehai.info/categories/Linux/"}],"tags":[{"name":"docker","slug":"docker","permalink":"http://zehai.info/tags/docker/"}]},{"title":"NoSql-introduction","slug":"2019-03-04-NoSql-introduction","date":"2019-03-04T13:01:07.000Z","updated":"2021-07-27T07:09:41.875Z","comments":true,"path":"2019/03/04/2019-03-04-NoSql-introduction/","link":"","permalink":"http://zehai.info/2019/03/04/2019-03-04-NoSql-introduction/","excerpt":"","text":"why nosqlNoSql可以处理结构化,非结构化的数据,可以水平伸缩,在实时和批量数据分析中具有优势 difference","raw":null,"content":null,"categories":[{"name":"database","slug":"database","permalink":"http://zehai.info/categories/database/"}],"tags":[]},{"title":"docker","slug":"2019-03-04-docker","date":"2019-03-04T03:07:16.000Z","updated":"2021-07-27T07:09:41.875Z","comments":true,"path":"2019/03/04/2019-03-04-docker/","link":"","permalink":"http://zehai.info/2019/03/04/2019-03-04-docker/","excerpt":"","text":"why docker 解决“在我的机子上可以正常工作”的问题 运维更好地管理服务 更好地迁移和拓展(任意平台运行) what is dockerdocker属于Linux容器的一种封装,和VM类似,但它不像VM一样虚拟在操作系统之上,而是和操作系统平级,程序运行在容器里,就和在真实的物理机上面运行一样 简单一点理解就是:程序运行在docker上和真机上几乎无误差,将程序包装起来管理 名词解释 daemon:守护进程 Client:命令行 image:镜像,用来创建容器 container:运行组件,启动的image就是容器 registry:管理image的地方 install #ubuntu $ sudo apt-get install docker-ce docker-ce-cli containerd.io HelloWorld sudo docker 
container run hello-world
//他会先找本地,然后再去仓库下载
//该过程将image变成容器,即image文件产生container文件
常用命令
docker pull image_name //拉取镜像
docker images //本地镜像
docker rmi xxx //remove image
docker ps //view what docker is running
docker ps -a
//以下使用cn代替 container_name/container_id
docker start|stop|restart cn
docker attach cn //启动后进入容器
docker rm cn
docker info
docker search nginx","raw":null,"content":null,"categories":[{"name":"Linux","slug":"Linux","permalink":"http://zehai.info/categories/Linux/"}],"tags":[{"name":"docker","slug":"docker","permalink":"http://zehai.info/tags/docker/"}]},{"title":"LineageOS16.0-RELEASE","slug":"2019-03-02-LineageOS16-0-RELEASE","date":"2019-03-02T12:12:52.000Z","updated":"2021-07-27T07:09:41.873Z","comments":true,"path":"2019/03/02/2019-03-02-LineageOS16-0-RELEASE/","link":"","permalink":"http://zehai.info/2019/03/02/2019-03-02-LineageOS16-0-RELEASE/","excerpt":"","text":"16.0正式发布我们从去年八月开始,努力将我们LineageOS的新特性移植到新版本的安卓上,非常感谢之前版本中的工作者们,我们才能够在这次的版本新特性中投入更多的精力,尤其是,隐私守护(Privacy Guard)和插件(su addon)上收到了大量的提升建议。通过对Styles API的一些细微更改,它现在可以兼容安卓暗黑模式的默认实现,在未来,越来越多的三方应用将遵循系统风格,这意味着Styles API将允许在跨应用程序时获得更一致的体验。正如我们发布夏季第二次调研结果那样,我们将介绍Trust的新特性,首先是设备锁定时阻止新USB设备连接。请注意,由于基于底层实现,所以这个特性必须在每个设备底层中启用。Trebuchet现在还可以隐藏app以及在打开app前进行身份验证。该限制也仅在Trebuchet中,并非系统范围。我们认为16.0的分支已经与15.1版本达到特性对等(feature parity)并做好了发布准备。随着16.0分支成为最新最活跃的分支,在2019.3.1,它将开始日更新构建,并且15.1将会移动到周更新。16.0版本将会从小部分机器开始运行,一些其他的机子如果准备好了,我们也会做一些小改动,开始构建,并通过改动构建脚本来更好地处理我们最新手机的独特feature,以及由此产生的复杂问题 支持更新名单 Asus BQ Fairphone Google HTC Huawei LeEco Lenovo LG Moto Nextbit Nubia Nvidia OnePlus(my oneplus 5T receive 16.0) Oppo Samsung Sony Wileyfox Wingtech Xiaomi YU ZTE Zuk more 其他热门的ROM MoKee crDroid MIUI Flyme PixelExperience 原文 Hello LineageOS 16.0We’ve been working hard since August to port our unique features to this new version of Android. 
Thanks to the major cleanup and refactoring done in the previous version, we were able to focus more on features and reliability this time; in particular, both Privacy Guard and the su addon received a sizeable amount of improvements. With some minor changes made to the Styles API, it is now compatible with what will eventually become the default implementation of dark mode in Android. In the future, more and more third party apps will follow the system style, meaning our Styles API will allow you to have a more coherent experience across apps. As we announced when the Summer Survey 2 results were posted, we will be introducing new features to Trust, beginning with the ability to block new USB device connections when device is locked. Please note that this feature has to be enabled on a per-device basis due to the layer at which this was implemented. Trebuchet is also now able to hide apps and require authentication before opening them. Please note that this restriction is limited to Trebuchet and is not system-wide. We feel that the 16.0 branch has reached feature parity with 15.1 and is ready for initial release. With 16.0 being the most recent and most actively-developed branch, on March 1st, 2019 it will begin receiving builds nightly and 15.1 will be moved to weekly builds. LineageOS 16.0 will be launching with a small selection of devices. Additional devices will begin receiving builds as they are ready and after we make minor change to our build scripts to better handle the unique features, and resulting complications, of the most modern devices. 
Upgrading to LineageOS 16.0 (Optional) Make a backup of your important data Download the build either from download portal or built in Updater app You can export the downloaded package from the Updater app to the sdcard by long-pressing it and then selecting “Export” in the popup menu Download proper addons packages (GApps, su…) for Android 9.0/Lineage OS 16.0 Make sure your recovery and firmware are up to date Format your system partition Follow the “Installing LineageOS from recovery” section on your device’s installation page Please note that if you’re currently on an official build, you DO NOT need to wipe your device. If you are installing from an unofficial build, you MUST wipe data from recovery before installing.","raw":null,"content":null,"categories":[{"name":"phones","slug":"phones","permalink":"http://zehai.info/categories/phones/"}],"tags":[]},{"title":"java类的加载机制","slug":"2019-03-02-java类的加载机制","date":"2019-03-02T09:41:18.000Z","updated":"2021-07-27T07:09:41.874Z","comments":true,"path":"2019/03/02/2019-03-02-java类的加载机制/","link":"","permalink":"http://zehai.info/2019/03/02/2019-03-02-java%E7%B1%BB%E7%9A%84%E5%8A%A0%E8%BD%BD%E6%9C%BA%E5%88%B6/","excerpt":"","text":"写在最前面:该文章为笔记,来自纯洁的微笑 what is the loading of class类加载即: 将编译class文件中的二进制数据读到内存中方法区,然后在堆区通过java.lang.Class实例化对象,对方法区的数据进行操作 该加载过程包含首次使用加载,以及预加载 加载class文件的方式 本地 网络 zip,jar文件中 数据库 动态编译 类的生命周期","raw":null,"content":null,"categories":[{"name":"Java","slug":"Java","permalink":"http://zehai.info/categories/Java/"}],"tags":[]},{"title":"ShoppingOffers","slug":"2019-03-02-ShoppingOffers","date":"2019-03-02T03:00:46.000Z","updated":"2021-07-27T07:09:41.874Z","comments":true,"path":"2019/03/02/2019-03-02-ShoppingOffers/","link":"","permalink":"http://zehai.info/2019/03/02/2019-03-02-ShoppingOffers/","excerpt":"","text":"problem In LeetCode Store, there are some kinds of items to sell. Each item has a price. 
However, there are some special offers, and a special offer consists of one or more different kinds of items with a sale price. You are given the each item’s price, a set of special offers, and the number we need to buy for each item. The job is to output the lowest price you have to pay for exactly certain items as given, where you could make optimal use of the special offers. Each special offer is represented in the form of an array, the last number represents the price you need to pay for this special offer, other numbers represents how many specific items you could get if you buy this offer. You could use any of special offers as many times as you want. examples Example 1: 1234567Input: [2,5], [[3,0,5],[1,2,10]], [3,2]Output: 14Explanation: There are two kinds of items, A and B. Their prices are $2 and $5 respectively. In special offer 1, you can pay $5 for 3A and 0BIn special offer 2, you can pay $10 for 1A and 2B. You need to buy 3A and 2B, so you may pay $10 for 1A and 2B (special offer #2), and $4 for 2A. Example 2: 1234567Input: [2,3,4], [[1,1,0,4],[2,2,1,9]], [1,2,1]Output: 11Explanation: The price of A is $2, and $3 for B, $4 for C. You may pay $4 for 1A and 1B, and $9 for 2A ,2B and 1C. You need to buy 1A ,2B and 1C, so you may pay $4 for 1A and 1B (special offer #1), and $3 for 1B, $4 for 1C. You cannot add more items, though only $9 for 2A ,2B and 1C. 
solution123456789101112131415161718192021222324252627282930313233343536373839404142434445import java.util.ArrayList;import java.util.Arrays;import java.util.List;public class ShoppingOffers { public static void main(String[] args) { /*以下贴出测试方式,因为对ArrayList不熟悉,如有更好的方式,欢迎指出*/ List<Integer> price = new ArrayList<Integer>(); List<List<Integer>> special = new ArrayList<List<Integer>>(); List<Integer> needs = new ArrayList<Integer>(); price.add(0, 2);price.add(1, 5); Integer[][] arr = new Integer[][] {{3,0,5},{1,2,10}}; special.add((List<Integer>)Arrays.asList(arr[0])); special.add((List<Integer>)Arrays.asList(arr[1])); needs.add(0,3);needs.add(1,2); ShoppingOffers so = new ShoppingOffers(); int res = so.shoppingOffers(price, special, needs); System.out.println(res); } public int shoppingOffers(List < Integer > price, List < List < Integer >> special, List < Integer > needs) { return shopping(price, special, needs); } public int shopping(List < Integer > price, List < List < Integer >> special, List < Integer > needs) { int j = 0, res = dot(needs, price); for (List < Integer > s: special) { ArrayList < Integer > clone = new ArrayList < > (needs); for (j = 0; j < needs.size(); j++) { int diff = clone.get(j) - s.get(j); if (diff < 0) break; clone.set(j, diff); } if (j == needs.size()) res = Math.min(res, s.get(j) + shopping(price, special, clone)); } return res; } public int dot(List < Integer > needs, List < Integer > price) { int sum = 0; for (int i = 0; i < needs.size(); i++) { sum += needs.get(i) * price.get(i); } return sum; }} key本题目采用动态规划的思路,我们带入测试样例1的 1234>Input: [2,5], [[3,0,5],[1,2,10]], [3,2]>即A=$2,B=$5>3A=5$,1A+2B=10$>需购买3A+2B 尝试 price 1:单买 16 2单买302套餐,还差2个B,则先算出2B的res为10,先试305套餐,A买超了,则退出305套餐,此时还有1210套餐,A买多了,退出套餐,两个套餐试完了,得到了单买两个B,$10的套餐,总价就为15元, 15(覆盖16) 3单买1210套餐,还差2A,0B,费用目前10元,先单买2A,费用4元,总价14元,然后先尝试305套餐,发现超,然后再试1210套餐,发现B超了,得到目前最低费用为14元 14(覆盖15) 
问题的关键就在于clone的精髓之处,用来记录还需要多少零件的个数,使用递归,进行操作。如果不符合,(如买超了)直接break后,重新计算clone,直到special方法都试完了,然后才返回,如果一直都是break的状态则会返回单买的价格。 perfect123456789101112131415161718192021222324252627282930313233343536373839404142class Solution { private Integer res; public int shoppingOffers(List<Integer> price, List<List<Integer>> special, List<Integer> needs) { res=Integer.MAX_VALUE; int[] parr=new int[price.size()]; int[] aarr=new int[needs.size()]; for(int i=0;i<parr.length; i++){ parr[i]=price.get(i); aarr[i]=needs.get(i); } findMinimum(special, 0, aarr, parr, 0); return res; } private void findMinimum(List<List<Integer>> special, int curOffer, int[] remain, int[] single, int total){ if(total>=res||curOffer==special.size()) return; int buyNow=buySingle(remain, single, total); if(buyNow<res) res=buyNow; int[] newRemain=remainAfterUse(special.get(curOffer), remain); if(newRemain!=null) findMinimum(special, curOffer, newRemain, single, total+special.get(curOffer).get(remain.length)); findMinimum(special, curOffer+1, remain, single, total); } private int[] remainAfterUse(List<Integer> special, int[] remain){ int[] res=new int[remain.length]; for(int i=0;i<remain.length;i++){ res[i]=remain[i]-special.get(i); if(res[i]<0) return null; } return res; } private int buySingle(int[] remain, int[] single, int total){ for(int i=0; i<remain.length; i++){ total+=remain[i]*single[i]; } return total; } }","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[]},{"title":"Node10值得关注的升级","slug":"2019-02-23-Node10值得关注的升级","date":"2019-02-23T07:12:13.000Z","updated":"2021-07-27T07:09:41.873Z","comments":true,"path":"2019/02/23/2019-02-23-Node10值得关注的升级/","link":"","permalink":"http://zehai.info/2019/02/23/2019-02-23-Node10%E5%80%BC%E5%BE%97%E5%85%B3%E6%B3%A8%E7%9A%84%E5%8D%87%E7%BA%A7/","excerpt":"","text":"http2优势 更有效的网络利用率 引入 HTTP Header 压缩减小报文体积 在同一个连接中支持多路并发 支持 Server Push 
多路复用(Multiplexing)HTTP/1.x 对同一域名下的并发请求数量有限制,超过则会被阻塞,而HTTP2可以在同一连接上发起多重请求,如同时请求样式文件和脚本文件 二进制分帧 HTTP/2 通过让所有数据流共用同一个连接,可以更有效地使用 TCP 连接,让高带宽也能真正地服务于 HTTP 的性能提升。 http2.0的格式定义更接近tcp层的方式,这种二进制的方式十分高效且精简。length定义了整个frame的开始到结束,type定义frame的类型(一共10种),flags用bit位定义一些重要的参数,stream id用作流控制,剩下的payload就是request的正文了。 server Pushhttp2.0能通过push的方式将客户端需要的内容预先推送过去 首部压缩(Header Compression) BigInt fs.mkdir 和 fs.mkdirSync 支持递归参数 CLI Flag 自动补全 Windows 安装包优化","raw":null,"content":null,"categories":[],"tags":[]},{"title":"Egg支持JS智能提醒","slug":"2019-02-23-Egg支持JSTS智能提醒","date":"2019-02-23T06:35:30.000Z","updated":"2021-07-27T07:09:41.873Z","comments":true,"path":"2019/02/23/2019-02-23-Egg支持JSTS智能提醒/","link":"","permalink":"http://zehai.info/2019/02/23/2019-02-23-Egg%E6%94%AF%E6%8C%81JSTS%E6%99%BA%E8%83%BD%E6%8F%90%E9%86%92/","excerpt":"","text":"本文章思路来自https://zhuanlan.zhihu.com/p/56780733 定位其实由于Egg本身的动态加载机制,所以JavaScript很难去做智能提醒(如变量定义检查),本次借鉴TS的动态生成d.ts,使用ts的Declaration Merging(声明合并)特性,读取JSDoc注释。 获取 更新egg-bin模块 package.json 添加 "egg": { "declarations": true } 实操,升级个人GitHub项目chum,执行 npm i egg-bin 将其从4.9.0->4.11.0并在package.json的尾部加上上述egg的kv,在根目录下生成typings文件夹,将app目录下的controller,model,以及根目录下的index,config目录都进行了ts文件生成 其实egg原生支持JavaScript,对于TS持支持但不推荐的态度,并没有使用TS去重构,本次智能提醒,应该是对JS一个劣势的补齐,解决方案也似乎借鉴了TS的方式,但又保留了人们书写JS的习惯","raw":null,"content":null,"categories":[{"name":"framework","slug":"framework","permalink":"http://zehai.info/categories/framework/"}],"tags":[{"name":"egg","slug":"egg","permalink":"http://zehai.info/tags/egg/"}]},{"title":"ThreeSum","slug":"2019-02-19-15ThreeSum","date":"2019-02-19T14:34:18.000Z","updated":"2021-07-27T07:09:41.872Z","comments":true,"path":"2019/02/19/2019-02-19-15ThreeSum/","link":"","permalink":"http://zehai.info/2019/02/19/2019-02-19-15ThreeSum/","excerpt":"","text":"Problem Given an array nums of n integers, are there elements a, b, c in nums such that a + b + c = 0? 
Find all unique triplets in the array which gives the sum of zero.
Note: The solution set must not contain duplicate triplets.
Example:
Given array nums = [-1, 0, 1, 2, -1, -4],
A solution set is:
[
  [-1, 0, 1],
  [-1, -1, 2]
]
Solution
public class Solution {
    public List<List<Integer>> threeSum(int[] nums) {
        Arrays.sort(nums);
        ArrayList<List<Integer>> res = new ArrayList<List<Integer>>();
        for(int i = 0; i < nums.length - 2; i++){
            // 跳过重复元素
            if(i > 0 && nums[i] == nums[i-1]) continue;
            // 计算2Sum
            ArrayList<List<Integer>> curr = twoSum(nums, i, 0 - nums[i]);
            res.addAll(curr);
        }
        return res;
    }
    private ArrayList<List<Integer>> twoSum(int[] nums, int i, int target){
        int left = i + 1, right = nums.length - 1;
        ArrayList<List<Integer>> res = new ArrayList<List<Integer>>();
        while(left < right){
            if(nums[left] + nums[right] == target){
                ArrayList<Integer> curr = new ArrayList<Integer>();
                curr.add(nums[i]);
                curr.add(nums[left]);
                curr.add(nums[right]);
                res.add(curr);
                do {
                    left++;
                } while(left < nums.length && nums[left] == nums[left-1]);
                do {
                    right--;
                } while(right >= 0 && nums[right] == nums[right+1]);
            } else if (nums[left] + nums[right] > target){
                right--;
            } else {
                left++;
            }
        }
        return res;
    }
}
Key tips:很久没有写Java了,花了点时间去整理了一些知识,所以上面的算法其实是ctrl+v的,现在整理一下list相关的知识: 1.List<List<Integer>>为嵌套的list集合,声明方式 List<List<Integer>> list = new ArrayList<List<Integer>>(); 或 List<List<Integer>> list = new ArrayList<>(); //recommend 2.List是一个接口,而ArrayList是List接口的一个实现类 List list = new List(); //是错误的用法 List list = new ArrayList(); //list会丢失ArrayList的trimToSize()方法 ArrayList list = new ArrayList(); 
3.然后明天再回来重新写这道题","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[]},{"title":"缓存","slug":"2019-02-19-缓存","date":"2019-02-19T12:12:14.000Z","updated":"2021-07-27T07:09:41.872Z","comments":true,"path":"2019/02/19/2019-02-19-缓存/","link":"","permalink":"http://zehai.info/2019/02/19/2019-02-19-%E7%BC%93%E5%AD%98/","excerpt":"","text":"缓存why 高性能 例如:把查完的值缓存,下次直接访问 高并发 例如:把请求排队 difference(vs memcached) 特征 redis memcached 数据结构 更复杂的数据结构,更丰富的数据操作 集群 支持 不支持 性能 单核 多核 redis线程模型redis 内部使用文件事件处理器 file event handler,这个文件事件处理器是单线程的,所以 redis 才叫做单线程的模型。它采用 IO 多路复用机制同时监听多个 socket,根据 socket 上的事件来选择对应的事件处理器进行处理。 假设一个 Redis 服务器正在运作, 那么这个服务器的监听套接字的 AE_READABLE 事件应该正处于监听状态之下, 而该事件所对应的处理器为连接应答处理器。 如果这时有一个 Redis 客户端向服务器发起连接, 那么监听套接字将产生 AE_READABLE 事件, 触发连接应答处理器执行: 处理器会对客户端的连接请求进行应答, 然后创建客户端套接字, 以及客户端状态, 并将客户端套接字的 AE_READABLE 事件与命令请求处理器进行关联, 使得客户端可以向主服务器发送命令请求。 之后, 假设客户端向主服务器发送一个命令请求, 那么客户端套接字将产生 AE_READABLE 事件, 引发命令请求处理器执行, 处理器读取客户端的命令内容, 然后传给相关程序去执行。 执行命令将产生相应的命令回复, 为了将这些命令回复传送回客户端, 服务器会将客户端套接字的 AE_WRITABLE 事件与命令回复处理器进行关联: 当客户端尝试读取命令回复的时候, 客户端套接字将产生 AE_WRITABLE 事件, 触发命令回复处理器执行, 当命令回复处理器将命令回复全部写入到套接字之后, 服务器就会解除客户端套接字的 AE_WRITABLE 事件与命令回复处理器之间的关联。","raw":null,"content":null,"categories":[{"name":"high_availability","slug":"high-availability","permalink":"http://zehai.info/categories/high-availability/"}],"tags":[]},{"title":"联合索引","slug":"2019-02-19-联合索引","date":"2019-02-19T08:27:23.000Z","updated":"2021-07-27T07:09:41.873Z","comments":true,"path":"2019/02/19/2019-02-19-联合索引/","link":"","permalink":"http://zehai.info/2019/02/19/2019-02-19-%E8%81%94%E5%90%88%E7%B4%A2%E5%BC%95/","excerpt":"","text":"key当具备多个索引的时候,如:KEY 联合索引 (a,b,c)为索引,除(b,c)条件不会触发该联合索引外,(a,b),(a,c),(a,b,c)均会触发上述联合索引,具体可参见explain的key类型,理论上应该显示联合索引 如: EXPLAIN SELECT * FROM TABLENAME WHERE a='2222' AND b='222' 
如果你设置多个单列索引,在explain下,key的值就为其单列的索引,如上述的a列","raw":null,"content":null,"categories":[{"name":"database","slug":"database","permalink":"http://zehai.info/categories/database/"}],"tags":[{"name":"sql","slug":"sql","permalink":"http://zehai.info/tags/sql/"}]},{"title":"树的后序遍历","slug":"2019-02-19-树的后序遍历","date":"2019-02-19T07:38:13.000Z","updated":"2021-07-27T07:09:41.872Z","comments":true,"path":"2019/02/19/2019-02-19-树的后序遍历/","link":"","permalink":"http://zehai.info/2019/02/19/2019-02-19-%E6%A0%91%E7%9A%84%E5%90%8E%E5%BA%8F%E9%81%8D%E5%8E%86/","excerpt":"","text":"definition1234567891011121314151617181920private static class BinaryNode<AnyType>{ BinaryNode(AnyType theElement) { this(theElement, null, null); } BinaryNode(AnyType theElement, BinaryNode<AnyType> lt, BinaryNode<AnyType> rt) { element = theElement; left = lt; right = rt; } AnyType element; BinaryNode<AnyType> left; BinaryNode<AnyType> right;}private BinaryNode<AnyType> root; posOrder123456789public void posOrder(BinaryNode<AnyType> Node) { if (Node != null) { posOrder(Node.left); posOrder(Node.right); System.out.print(Node.element + " "); } } 1234567891011121314151617181920212223242526272829public void posOrder(BinaryNode<AnyType> Node){ Stack<BinaryNode> stack1 = new Stack<>(); Stack<Integer> stack2 = new Stack<>(); int i = 1; while(Node != null || !stack1.empty()) { while (Node != null) { stack1.push(Node); stack2.push(0); Node = Node.left; } while(!stack1.empty() && stack2.peek() == i) { stack2.pop(); System.out.print(stack1.pop().element + " "); } if(!stack1.empty()) { stack2.pop(); stack2.push(1); Node = stack1.peek(); Node = Node.right; } 
}}","raw":null,"content":null,"categories":[{"name":"algorithm","slug":"algorithm","permalink":"http://zehai.info/categories/algorithm/"}],"tags":[]},{"title":"表内关联","slug":"2019-02-19-表内关联","date":"2019-02-19T07:37:35.000Z","updated":"2021-07-27T07:09:41.873Z","comments":true,"path":"2019/02/19/2019-02-19-表内关联/","link":"","permalink":"http://zehai.info/2019/02/19/2019-02-19-%E8%A1%A8%E5%86%85%E5%85%B3%E8%81%94/","excerpt":"","text":"Inner Join SELECT column_listFROM t1INNER JOIN t2 ON join_condition1INNER JOIN t3 ON join_condition2…WHERE where_conditions; Example id name parentid 1 北京市 0 2 海淀区 1 3 北京xx大学 2 select a.name 市,b.name 区,c.name 名from address ajoin address b on b.parentid = a.idjoin address c on c.parentid = b.idjoin address d on d.parentid = c.id","raw":null,"content":null,"categories":[{"name":"database","slug":"database","permalink":"http://zehai.info/categories/database/"}],"tags":[{"name":"sql","slug":"sql","permalink":"http://zehai.info/tags/sql/"}]},{"title":"LongestCommonPrefix","slug":"2019-02-18-LongestCommonPrefix","date":"2019-02-18T14:47:36.000Z","updated":"2021-07-27T07:09:41.872Z","comments":true,"path":"2019/02/18/2019-02-18-LongestCommonPrefix/","link":"","permalink":"http://zehai.info/2019/02/18/2019-02-18-LongestCommonPrefix/","excerpt":"","text":"Problem Write a function to find the longest common prefix string amongst an array of strings. If there is no common prefix, return an empty string "". Example 1: 12Input: ["flower","flow","flight"]Output: "fl" Example 2: 123Input: ["dog","racecar","car"]Output: ""Explanation: There is no common prefix among the input strings. Note: All given inputs are in lowercase letters a-z. 
solution
class Solution {
    public String longestCommonPrefix(String[] strs) {
        int minLength = minLength(strs);
        String same = "";
        boolean isSame = false;
        outer:
        for (int i = 0; i < minLength; i++) {
            char sameCharacter = strs[0].charAt(i);
            isSame = false;
            for (int j = 0; j < strs.length; j++) {
                if (strs[j].charAt(i) == sameCharacter) {
                    isSame = true;
                } else {
                    isSame = false;
                    break outer;
                }
            }
            if (isSame) same += sameCharacter;
        }
        return same;
    }
    private int minLength(String[] strs) {
        if (strs.length == 0) {
            return 0;
        }
        int min = strs[0].length();
        for (int i = 1; i < strs.length; i++) {
            if (strs[i].length() < min) {
                min = strs[i].length();
            }
        }
        return min;
    }
}
key数学题,没什么关键,但是这个解法,还是存在优化空间 Perfect
class Solution {
    public String longestCommonPrefix(String[] strs) {
        if (strs == null || strs.length == 0) {
            return "";
        }
        String prefix = strs[0];
        for (int i = 1; i < strs.length; ++i) {
            while (!strs[i].startsWith(prefix)) {
                prefix = prefix.substring(0, prefix.length() - 1);
                if (prefix.isEmpty()) {
                    break;
                }
            }
        }
        return prefix;
    }
}","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[]},{"title":"ReverseInteger","slug":"2019-02-17-ReverseInteger","date":"2019-02-17T04:38:41.000Z","updated":"2021-07-27T07:09:41.871Z","comments":true,"path":"2019/02/17/2019-02-17-ReverseInteger/","link":"","permalink":"http://zehai.info/2019/02/17/2019-02-17-ReverseInteger/","excerpt":"","text":"ProblemGiven a 32-bit signed integer, reverse digits of an integer. Example 1: Input: 123 Output: 321 Example 2: Input: -123 Output: -321 Example 3: Input: 120 Output: 21 Note: Assume we are dealing with an environment which could only store integers within the 32-bit signed integer range: [−2^31, 2^31 − 1]. For the purpose of this problem, assume that your function returns 0 when the reversed integer overflows. 
Solution1234567891011121314151617181920class Solution { public int reverse(int x) { if (x == 0) { return 0; } int result =0; long result_l = 0; while (x != 0) { result_l = result_l * 10 + x % 10; x = x / 10; } if(result_l >= Integer.MAX_VALUE||result_l <= Integer.MIN_VALUE) { return 0; }else { result = (int) result_l; } return result; }} keys1.倒序很简单,取余赋给新数就可以了,不过注意JavaScript或者Python的int–>float的情况 2.题目下面其实提示了int的范围,改题目1032个测试数据,有大概7个是超范围的验证数据,所以java中可以巧利用Integer.MAX来进行处理。 perfect12345678public int reverse(int x) { long res = 0; while (x != 0) { res = res * 10 + x % 10; x = x / 10; } return (int)res == res ? (int)res : 0; }","raw":null,"content":null,"categories":[{"name":"LeetCode","slug":"LeetCode","permalink":"http://zehai.info/categories/LeetCode/"}],"tags":[{"name":"Easy","slug":"Easy","permalink":"http://zehai.info/tags/Easy/"}]},{"title":"如何生成tag和categories","slug":"2019-02-17-如何生成tag和categories","date":"2019-02-17T04:37:23.000Z","updated":"2021-07-27T07:09:41.871Z","comments":true,"path":"2019/02/17/2019-02-17-如何生成tag和categories/","link":"","permalink":"http://zehai.info/2019/02/17/2019-02-17-%E5%A6%82%E4%BD%95%E7%94%9F%E6%88%90tag%E5%92%8Ccategories/","excerpt":"","text":"12hexo new page tagshexo new page categories","raw":null,"content":null,"categories":[{"name":"others","slug":"others","permalink":"http://zehai.info/categories/others/"}],"tags":[]},{"title":"为什么使用消息队列MQ","slug":"2019-01-24-为什么使用消息队列MQ","date":"2019-01-24T10:24:15.000Z","updated":"2021-07-27T07:09:41.871Z","comments":true,"path":"2019/01/24/2019-01-24-为什么使用消息队列MQ/","link":"","permalink":"http://zehai.info/2019/01/24/2019-01-24-%E4%B8%BA%E4%BB%80%E4%B9%88%E4%BD%BF%E7%94%A8%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97MQ/","excerpt":"","text":"从实习到后来的两份工作也写了不少的项目,在最近的一份工作用到了大量的消息队列(客服系统,会有大量的访客咨询消息),让我重新回顾了一下在大数据面前,为什么要用消息队列,怎么用好消息队列 理由 解耦 异步 削峰 解耦通过一个 MQ,Pub/Sub 发布订阅消息这么一个模型,不同微服务之间通信会更加解耦,A给BCDEF发送消息的时候,就不需要考虑他们是否宕机,如何重发等,只需要将信息发送到队列里,让他们自己去取就好了 
异步假设用户请求需要写表,那么吧任务放进队列里,等待写入,前端可以先返回,可以减少用户的等待时间,或者采用多个机器同时写数据的不同部分,加快数据的处理 削峰就和平时用电一样,晚上电网的压力肯定会很大,如果直接把大量请求压到服务器,会直接宕机,但如果把请求排成队列,然后服务器从里面顺序取,虽然会增加延迟,但是不会宕机,满负荷运作而已 实际生产环境咨询系统大致分为:咨询核心,端模块,微信模块,分配模块等等,访客发送的咨询信息(web)可能先经过端模块,在咨询核心模块处理前进入队列,然后,分配模块根据用户的设置,如接入客服还是机器人,按什么权重进行分配,分配给哪一个业务组进行操作,来减轻咨询核心的压力 缺点1.系统可用性降低(MQ挂了咋整) 2.复杂度提升(消息没有重复消费,不会丢失) 3.一致性问题有待解决 特性 ActiveMQ RabbitMQ RocketMQ Kafka 单机吞吐量 万级,比 RocketMQ、Kafka 低一个数量级 同 ActiveMQ 10 万级,支撑高吞吐 10 万级,高吞吐,一般配合大数据类的系统来进行实时数据计算、日志采集等场景 topic 数量对吞吐量的影响 topic 可以达到几百/几千的级别,吞吐量会有较小幅度的下降,这是 RocketMQ 的一大优势,在同等机器下,可以支撑大量的 topic topic 从几十到几百个时候,吞吐量会大幅度下降,在同等机器下,Kafka 尽量保证 topic 数量不要过多,如果要支撑大规模的 topic,需要增加更多的机器资源 时效性 ms 级 微秒级,这是 RabbitMQ 的一大特点,延迟最低 ms 级 延迟在 ms 级以内 可用性 高,基于主从架构实现高可用 同 ActiveMQ 非常高,分布式架构 非常高,分布式,一个数据多个副本,少数机器宕机,不会丢失数据,不会导致不可用 消息可靠性 有较低的概率丢失数据 基本不丢 经过参数优化配置,可以做到 0 丢失 同 RocketMQ 功能支持 MQ 领域的功能极其完备 基于 erlang 开发,并发能力很强,性能极好,延时很低 MQ 功能较为完善,还是分布式的,扩展性好 功能较为简单,主要支持简单的 MQ 功能,在大数据领域的实时计算以及日志采集被大规模使用 所以中小型公司,用 RabbitMQ 是不错的选择 大型公司,基础架构研发实力较强,用 RocketMQ 是很好的选择 如果是大数据领域的实时计算、日志采集等场景,用 Kafka 是业内标准的,绝对没问题,社区活跃度很高,绝对不会黄,何况几乎是全世界这个领域的事实性规范。","raw":null,"content":null,"categories":[{"name":"high_availability","slug":"high-availability","permalink":"http://zehai.info/categories/high-availability/"}],"tags":[{"name":"MQ","slug":"MQ","permalink":"http://zehai.info/tags/MQ/"}]},{"title":"递归优化","slug":"2019-01-23-递归优化","date":"2019-01-23T03:54:43.000Z","updated":"2021-07-27T07:09:41.870Z","comments":true,"path":"2019/01/23/2019-01-23-递归优化/","link":"","permalink":"http://zehai.info/2019/01/23/2019-01-23-%E9%80%92%E5%BD%92%E4%BC%98%E5%8C%96/","excerpt":"","text":"递归优化原因:在 Java 中,每个线程都有独立的 Java 虚拟机栈。栈具有后入先出的特点,递归调用也是需要后调用的方法先返回,因此使用栈来存储递归调用的信息。这些信息存储在栈帧中,每个 Java 方法在执行时都会创建一个栈帧,用来存储局部变量表、操作数栈、常量池引用等信息。在调用方法时,对应着一个栈帧入栈,而方法返回时,对应着一个栈帧出栈。 随着栈帧frame的增多,将会导致Stack Overflow的报错,例如 1234567int f(int i){ if(i == 1 || i == 2) return 1; else return (f(i - 1) + f(i - 2));} 解决方法1:递归–>非递归其实很简单,就是用一个临时变量,来保存中间的值,而不是压入堆栈中, 
//斐波那契数列,前两位是1,之后每位数是前两位数的和
private static void fibonacci(int n) {
    int temp1=1,temp2=1,temp;
    for (int i = 1; i <=n ; i++) {
        temp=temp1+temp2;
        temp1=temp2;
        temp2=temp;
    }
    System.out.println();
}
//粘贴于网上 解决办法2:递归->尾递归尾递归就是当函数在最后一步(尾部)调用自身,如: 
function f(x){
  return g(x);
}
以下算法来自阮一峰教程: 
function factorial(n) {
  if (n === 1) return 1;
  return n * factorial(n - 1);
}
factorial(5) // 120
该算法并非是尾递归,因为其在返回值的时候进行了一个乘法操作,所以还是普通的递归,复杂度为O(n),而如果改成尾递归,则: 
function factorial(n, total) {
  if (n === 1) return total;
  return factorial(n - 1, n * total);
}
factorial(5, 1) // 120
该算法只需要计算 factorial(5,1) factorial(4,5) factorial(3,20) factorial(2,60) factorial(1,120) 在进入新的递归函数时,尾递归不再需要使用栈帧保存数据,允许抛弃旧的栈帧,那么只需要保存一个栈帧即可 参考资料: [阮一峰尾递归](http://www.ruanyifeng.com/blog/2015/04/tail-call.html)","raw":null,"content":null,"categories":[{"name":"algorithm","slug":"algorithm","permalink":"http://zehai.info/categories/algorithm/"}],"tags":[{"name":"regreesion","slug":"regreesion","permalink":"http://zehai.info/tags/regreesion/"}]}]}